Fine-tuned model ignores all instructions/prompts from training data

I’m having issues with fine-tuning GPT-3.5-Turbo. Following the documentation, I created a file with approximately 65 examples. However, when I send a message to the fine-tuned model, it disregards the instructions in the prompt and doesn’t adhere to any of the outputs from my training data. The model starts generating descriptions based on the data I provide, even though none of my prompts or training data instructs it to do so.

When I use one of the user content messages from my training data with the base model in the playground—without any fine-tuning—it returns precisely what I want. So I’m left wondering: what could be going wrong? Am I missing something, or has anyone else faced this issue and found a solution? The primary reason I’m fine-tuning is to format the output in a more predictable manner.

Hi and welcome to the Developer Forum!

Some points to consider: 65 is not a lot of training examples. If you want a comprehensive and robust training set, example sets containing thousands or even tens of thousands of examples are preferable.

Also, the nature and format of your training data can have a significant effect on performance. Can you post some example entries from your training set?
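For comparison, here is a minimal sketch of what one entry in a chat fine-tuning JSONL file should look like (one JSON object per line, each with a `messages` array of system/user/assistant turns). The actual content strings below are placeholders—substitute your own formatting task:

```python
import json

# One training example in the chat fine-tuning JSONL format.
# The system/user/assistant content here is illustrative only.
example = {
    "messages": [
        {"role": "system", "content": "You format product data as JSON."},
        {"role": "user", "content": "Name: Widget, Price: 9.99"},
        {"role": "assistant", "content": '{"name": "Widget", "price": 9.99}'},
    ]
}

# Each line of the .jsonl training file is one such serialized object.
line = json.dumps(example)
print(line)
```

A common cause of the behavior you describe is inconsistency between the system message used in training and the one sent at inference time—if every training example uses one system prompt but you query the model with a different (or missing) one, the fine-tuned behavior often won’t trigger.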
