Hi,
I’ve fine-tuned the davinci model on my own data. For an initial test I used only 50 examples. I wasn’t expecting a perfect response, just one slightly different from what the default davinci gives; I hoped my data would complement davinci. Instead, the fine-tuned model’s responses are completely broken. Below are the responses from the default davinci and from the fine-tuned davinci.
Default davinci model’s response:
Prompt: Differentiated topics for ed-tech startup teaching machine learning engineering courses?
Completion:
- Robotics: Designing and Developing Autonomous Agents
- Natural Language Processing: Building an Intelligent Chatbot
- Computer Vision: Enhancing Image Recognition
- Deep Learning: Building Neural Networks
- …
Response of the fine-tuned davinci for the same prompt:
{
  "id": "cmpl-6sasaYkHtxnKMwWMkQxQtFFoGRzfA",
  "object": "text_completion",
  "created": 1678469496,
  "model": "davinci:ft-personal:product-gpt-03102023-0728am-2023-03-10-15-36-51",
  "choices": [
    {
      "text": " product opportunities for an Edtech startup for Machine Learning product opportunities for an Edtech startup for Machine Learning product opportunities for an Edtech startup for Machine Learning product opportunities for an Edtech startup for Machine Learning product opportunities for an Edtech startup for Machine Learning product opportunities for an Edtech startup for Machine Learning product opportunities for an Edtech startup Learning",
      "index": 0,
      "logprobs": null,
      "finish_reason": "length"
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 200,
    "total_tokens": 213
  }
}
When I tried the same prompt in the Playground, it gave a similarly garbled response; later it returned no response at all and crashed.
Any help is appreciated! I am trying to generate prompt responses that incorporate elements of my data, which I gathered from user reviews, user feedback, and other social channels.
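For context, my understanding of the legacy fine-tunes format is that each JSONL training example should end its prompt with a fixed separator and end its completion with a stop sequence (and start the completion with a leading space), so the model learns where to stop instead of repeating. A minimal sketch of building one such record; the separator, stop token, and example text here are illustrative, not my actual training data:

```python
import json

SEPARATOR = "\n\n###\n\n"  # marks the end of every prompt
STOP = " END"              # stop sequence appended to every completion

def make_example(prompt: str, completion: str) -> str:
    """Return one JSONL line in the prompt/completion fine-tuning shape."""
    record = {
        "prompt": prompt + SEPARATOR,
        # completions conventionally start with a leading space
        "completion": " " + completion + STOP,
    }
    return json.dumps(record)

line = make_example(
    "Differentiated topics for ed-tech startup teaching machine learning engineering courses?",
    "Robotics: Designing and Developing Autonomous Agents",
)
print(line)
```

At inference time the same separator would be appended to the prompt, and the stop sequence passed as the `stop` parameter, so generation halts instead of running to the token limit (my failed completion above has finish_reason "length", which suggests no stop was ever produced).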