Fine-tuned davinci - messed up completion

Hi,

I’ve fine-tuned the davinci model with my own data. For testing purposes, I started with 50 examples — not expecting a perfect response, just one slightly different from what the default davinci gives. I was hoping my data would complement davinci. Instead, it completely messed up the responses. Below are the responses from the default davinci and from the fine-tuned davinci.

Default davinci model’s response:

Prompt: Differentiated topics for ed-tech startup teaching machine learning engineering courses?

Completion:

  1. Robotics: Designing and Developing Autonomous Agents
  2. Natural Language Processing: Building an Intelligent Chatbot
  3. Computer Vision: Enhancing Image Recognition
  4. Deep Learning: Building Neural Networks

Response of the fine-tuned davinci for the same prompt:
{"id":"cmpl-6sasaYkHtxnKMwWMkQxQtFFoGRzfA","object":"text_completion","created":1678469496,"model":"davinci:ft-personal:product-gpt-03102023-0728am-2023-03-10-15-36-51","choices":[{"text":" product opportunities for an Edtech startup for Machine Learning product opportunities for an Edtech startup for Machine Learning product opportunities for an Edtech startup for Machine Learning product opportunities for an Edtech startup for Machine Learning product opportunities for an Edtech startup for Machine Learning product opportunities for an Edtech startup for Machine Learning product opportunities for an Edtech startup Learning","index":0,"logprobs":null,"finish_reason":"length"}],"usage":{"prompt_tokens":13,"completion_tokens":200,"total_tokens":213}}

When I tried it in the Playground, it gave me a similarly messed-up response. Later, it gave no response at all — it crashed.

Any help is appreciated! I am trying to generate responses that incorporate elements of my data, which I gathered from user reviews, user feedback, and other social channels.

You are missing a separator between the prompt and completion, such as `->` or `###`, so the model doesn’t understand that it’s supposed to start the completion. Instead, it just continues your prompt.

The separator needs to appear both in the prompts of your training data and at the end of the prompt you send in the Playground (or via the API).
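As a minimal sketch of what this could look like: the snippet below builds a JSONL training file where every prompt ends with a fixed separator and every completion ends with a fixed stop sequence. The `\n\n###\n\n` separator, the ` END` stop sequence, the example text, and the `training_data.jsonl` filename are all arbitrary illustrative choices — any fixed tokens work, as long as they don’t occur naturally in your data.

```python
import json

# Arbitrary (hypothetical) choices -- use any fixed tokens
# that never appear inside your prompts or completions.
SEPARATOR = "\n\n###\n\n"
STOP = " END"

# Hypothetical training examples. Each prompt ends with the separator;
# each completion starts with a space and ends with the stop sequence.
examples = [
    {
        "prompt": "Differentiated topics for an ed-tech startup teaching "
                  "machine learning engineering courses?" + SEPARATOR,
        "completion": " 1. Robotics 2. NLP 3. Computer Vision 4. Deep Learning" + STOP,
    },
]

# Write the examples in the JSONL format expected for fine-tuning:
# one JSON object per line with "prompt" and "completion" keys.
with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

At inference time the same separator must be appended to the prompt, and the stop sequence passed to the API (e.g. the `stop` parameter of the legacy completions endpoint), so generation ends where your completions ended in training.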