I’m currently fine-tuning a GPT-3.5 model to create a conversational chatbot. I have 46 distinct conversations, each representing a different scenario between the user and the assistant, all within the same domain. Could you kindly advise me on the minimum amount of conversation data necessary to achieve optimal model performance? Your insights would be greatly appreciated.
In my experience, fine-tuning only really works for adjusting tone and “character”.
If you want your chatbot to use domain knowledge in its answers, a RAG (retrieval-augmented generation) system performs much, much better, in addition to being considerably more transparent.
In that case, you can also get away with far fewer examples.
Hope this helps.
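To make the RAG suggestion concrete, here’s a minimal sketch of the idea: retrieve the most relevant domain documents for a question, then prepend them to the prompt so the model answers from your data. This uses plain word overlap as a stand-in for a real embedding/vector search, and the documents and question are invented placeholders.

```python
def tokenize(text):
    """Lowercase and split into a set of words (toy tokenizer)."""
    return set(text.lower().split())

def retrieve(question, documents, k=1):
    """Return the k documents sharing the most words with the question."""
    q = tokenize(question)
    scored = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return scored[:k]

def build_prompt(question, documents):
    """Prepend retrieved context so the model answers from your domain data."""
    context = "\n".join(retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Made-up example corpus and question.
docs = [
    "Refunds are processed within 5 business days.",
    "Support is available Monday through Friday.",
]
prompt = build_prompt("How long do refunds take?", docs)
```

In a real system you’d swap the word-overlap scoring for embedding similarity against a vector store, but the prompt-assembly step looks the same.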
Hey champ! And welcome to the developer community forum!
46 examples should be enough to create a very decent fine-tune!
You can achieve different things with fine-tuning. If you expand a bit on what you mean by “optimal performance”, we might be able to help you a bit better.
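As a side note, when preparing those 46 conversations, each one should be a single JSONL line in the chat format the fine-tuning API expects (a `messages` list of system/user/assistant turns). A small sketch of writing one such line; the conversation content is a made-up placeholder:

```python
import json

# One training example per JSONL line, in the chat fine-tuning format.
# The conversation text below is an invented placeholder.
example = {
    "messages": [
        {"role": "system", "content": "You are a helpful support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Go to Settings > Security and choose Reset."},
    ]
}

with open("train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")
```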
In certain instances, when the conversation is a new scenario, the results tend to deteriorate, and the prompt restrictions don’t hold every time. Sometimes the model also returns literal text such as “supported” or “[names provided]” during a conversation. So I thought more conversation data might improve the model.