I’m having a problem with the context given when I fine-tune a model. I’m using gpt-3.5-turbo and uploading the training data as a JSONL file, where each example has a system message, a user message and an assistant message. The fine-tuning job completes and my new fine-tuned model works perfectly, but only in the OpenAI Playground. When I call it from my web app and ask for something specific that I covered in the fine-tuning data, the AI answers like plain gpt-3.5 and not like my fine-tuned model. Even in the Playground it only works when I write the context (system message) manually; if I don’t specify any context, it doesn’t answer the way I want.
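For reference, each line of my training JSONL looks roughly like this (the texts here are just illustrative, not my real data):

```json
{"messages": [{"role": "system", "content": "You are MathBot, always answer in one short sentence."}, {"role": "user", "content": "1+1"}, {"role": "assistant", "content": "1+1 is 2."}]}
```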
I also tried using a shorter context, and the same thing happens.
Here’s an example comparing both models, one with the context written manually and the other without any context:
Also, when I ask something that I included in the fine-tuning data (1+1, for example), the AI answers with exactly the assistant content that corresponds to that user message, but only when I specify the context.
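This is roughly how I call the fine-tuned model from my web backend (a simplified Python sketch using the openai library; the API key and model id below are placeholders, not my real values):

```python
import openai

openai.api_key = "sk-..."  # placeholder, my real key goes here

response = openai.ChatCompletion.create(
    # placeholder id, I use the one returned by my fine-tuning job
    model="ft:gpt-3.5-turbo-0613:my-org::xxxxxxx",
    messages=[
        # only the user's question is sent here, no system message
        {"role": "user", "content": "1+1"},
    ],
)

print(response["choices"][0]["message"]["content"])
```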
Does anyone know how to solve this?