Fine-Tuned Model Not Responding with Expected Answers

Hello everyone,

I am currently facing an issue with my fine-tuned model on OpenAI’s platform. After training the model using a dataset that includes specific Q&A pairs about my company, I expected the model to respond accurately to related queries. However, when I ask questions like “Who is the marketing expert in your company?”, the model provides generic or unrelated responses, such as saying the expert is an AI-designed persona created by OpenAI.

Here are the steps I followed:

  1. I created a fine-tuning dataset in the required chat format, including multiple variations of the question about the marketing expert.
  2. I ensured that the model ID in my API calls is correct and corresponds to the fine-tuned version.
  3. I added a system message in my prompts to reinforce the context.
Despite these adjustments, the model still does not draw on the trained data effectively.
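For reference, steps 1 and 3 can be sketched as below. The company name, Q&A pairs, and system message are made-up placeholders; the key point is that each JSONL training record uses the chat format, and the same system message should be sent again at inference time with the fine-tuned model ID.

```python
import json

# Hypothetical system message -- whatever you train with here should be
# repeated verbatim in your API calls to the fine-tuned model.
SYSTEM_MSG = "You are the assistant for Acme Corp. Answer from company facts."

# Hypothetical Q&A variations about the marketing expert.
qa_pairs = [
    ("Who is the marketing expert in your company?",
     "Our marketing expert is Jane Doe, Head of Marketing at Acme Corp."),
    ("Who leads marketing at Acme Corp?",
     "Jane Doe leads marketing at Acme Corp."),
]

def to_training_line(question, answer):
    """One JSONL line in the chat format expected by the fine-tuning endpoint."""
    return json.dumps({
        "messages": [
            {"role": "system", "content": SYSTEM_MSG},
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    })

jsonl_lines = [to_training_line(q, a) for q, a in qa_pairs]

# At inference time, pass the fine-tuned model ID along with the same
# system message, e.g. model="ft:gpt-4o-mini-2024-07-18:org::abc123"
# (placeholder ID) in client.chat.completions.create(...).
```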

I would appreciate any insights or suggestions on how to resolve this issue. Has anyone else encountered a similar problem? What steps can I take to ensure the model provides the expected responses based on the fine-tuning data?

Thank you for your help!


Fine-tuning is intended to teach the model *how* you want your answers phrased, not *what* facts should go into them. To make the model genuinely learn new knowledge through training, you would basically have to rebuild the whole model…

The solution in your case would be a RAG (retrieval-augmented generation) pipeline connected to your company data, so the model can draw on that information when forming answers. Fine-tuning at this scale is not suited to your use case.
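The RAG approach described above can be sketched as follows. Real pipelines use vector embeddings for retrieval; plain word overlap is used here only to keep the sketch self-contained, and the document snippets and names are invented for illustration.

```python
# Minimal RAG sketch: retrieve the most relevant company snippet, then build
# a grounded prompt for a stock chat model. No fine-tuning involved.
docs = [
    "Jane Doe is the marketing expert and Head of Marketing at Acme Corp.",
    "Acme Corp was founded in 2010 and builds widgets.",
]

def retrieve(question, documents):
    """Pick the document sharing the most words with the question.
    (A real system would rank by embedding similarity instead.)"""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question, documents):
    context = retrieve(question, documents)
    return [
        {"role": "system",
         "content": f"Answer using only this company context: {context}"},
        {"role": "user", "content": question},
    ]

messages = build_prompt("Who is the marketing expert in your company?", docs)
# These messages would then be sent via client.chat.completions.create(...).
```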


If you are already following all the guidelines and your expectations are high, I would try:

  • not gpt-4o-mini, an ultra-small model made viable only by OpenAI’s heavy overtraining, but the higher-quality gpt-3.5-turbo-0613 as the base, or ultimately gpt-4o-2024-08-06 (the 2024-05-13 version would really be what is desired, but it is unavailable for fine-tuning).
  • increasing the learning-rate multiplier hyperparameter, which is a cheaper way to deepen the weight of training than running more epochs.
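A sketch of what that job submission might look like. The training-file ID is a placeholder, and the multiplier value is illustrative rather than a tested recommendation; check the current fine-tuning docs for supported base models and hyperparameter ranges.

```python
# Parameters for a fine-tuning job with an explicit learning-rate multiplier.
job_params = {
    "training_file": "file-abc123",       # placeholder: ID of the uploaded JSONL
    "model": "gpt-4o-2024-08-06",         # higher-quality base suggested above
    "hyperparameters": {
        "n_epochs": 3,
        "learning_rate_multiplier": 2.0,  # raise this before adding more epochs
    },
}

# With the official client this would be submitted as:
#   client.fine_tuning.jobs.create(**job_params)
```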