GPT-3.5-Turbo - Unable to prompt engineer Fine-tuned model

I have fine-tuned gpt-3.5-turbo on 5,000 conversations from a dataset of restaurant reservation conversations, using the default fine-tuning settings.

I now want to expand the model's capability so that it asks for the caller's email address before ending the conversation (this behavior is not present in the dataset conversations). I have tried changing the prompts, but no matter what prompt I use, the model only follows the conversation flow from the dataset. How do I address this issue? Does fine-tuning make the model inflexible?

The term for this is overfitting.

Yes, this can happen, especially with the default epochs hyperparameter, which OpenAI seemingly sets high enough that even a small fine-tune training file can override the massive chat tuning that gpt-3.5-turbo comes with.
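If you re-run the fine-tune, one lever worth trying is explicitly lowering the epoch count so the fine-tune has a lighter touch. A minimal sketch of the job parameters, assuming the current OpenAI Python SDK's `client.fine_tuning.jobs.create(...)` endpoint (the training file id here is a placeholder):

```python
# Sketch: request fewer training epochs so the fine-tune overrides the
# base model's chat behavior less aggressively.
job_params = {
    "model": "gpt-3.5-turbo",
    "training_file": "file-abc123",  # placeholder: your uploaded JSONL file id
    "hyperparameters": {
        # By default OpenAI chooses n_epochs automatically; setting it to 1
        # trains a single pass over your 5,000 conversations.
        "n_epochs": 1,
    },
}

# The actual call (requires an API key) would be:
#   from openai import OpenAI
#   client = OpenAI()
#   job = client.fine_tuning.jobs.create(**job_params)
print(job_params["hyperparameters"])
```

Fewer epochs means the model memorizes the dataset's conversation flow less rigidly, at the cost of weaker adherence to the trained style.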

You might have included a system prompt in training that you also use in practice. One way to break away from the fine-tuned behavior in select instances is to fill the AI context with a whole new system prompt that acts as a different identity.
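A sketch of what that could look like, assuming the OpenAI Python SDK and a hypothetical fine-tuned model id (the `build_messages` helper and the exact system wording are illustrative, not from the original post):

```python
# Sketch: prepend a fresh system identity that differs from the one used
# in training, plus an explicit instruction to collect the email address.
def build_messages(history):
    """Prepend a new system prompt to the running conversation history."""
    system = (
        "You are a restaurant reservations assistant. Before ending any "
        "conversation, you MUST ask for the caller's email address."
    )
    return [{"role": "system", "content": system}] + history

messages = build_messages(
    [{"role": "user", "content": "I'd like a table for two tonight."}]
)

# The actual call (requires an API key) would be something like:
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       model="ft:gpt-3.5-turbo-0125:my-org::abc123",  # placeholder model id
#       messages=messages,
#   )
```

If the training data contained its own system prompt, using a clearly different one here gives the base chat tuning a chance to reassert itself over the fine-tuned flow.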

The actual behavior you want, the AI interjecting "by the way, what's your email?", is its own problem, and one that is not typical of a chatbot. You could try your prompting techniques on plain gpt-3.5-turbo first and see if that isn't its own basket of kittens.