I’m working on a new version of my AI personal assistant, and after many hours of tinkering with the prompts I gave up on 3.5-turbo, as it is just not capable of following instructions properly. GPT-4 handles it quite well, but the cost is far too high for the kind of agent I need.
(it’s funny how 3.5 seems so dumb now)
So the news of fine-tuning the chat models was right on time.
However, the docs state that fine-tuning does not yet support function calling in the training samples. What I would like to know, in case anyone has already tried it, is whether a fine-tuned model retains the function calling capabilities of its base model, or whether function calling is lost completely. I rely heavily on function calling, which is also what makes GPT-4 so expensive: the function descriptions count toward token usage, and mine are quite verbose.
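To give a sense of what I mean by verbose, here is roughly what one of my function specs looks like (the function itself is made up for illustration, but the shape matches what the chat completions `functions` parameter expects):

```python
# A typical entry in the `functions` list for a chat completions call.
# The name, description, and parameter schema are all serialized into
# the prompt, so every character here counts toward input tokens.
get_calendar_events = {
    "name": "get_calendar_events",  # hypothetical function, for illustration
    "description": (
        "Retrieve the user's calendar events between two dates. "
        "Dates must be ISO 8601. Returns a list of events with title, "
        "start time, end time, location, and attendees."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "start_date": {"type": "string", "description": "ISO 8601 start date"},
            "end_date": {"type": "string", "description": "ISO 8601 end date"},
        },
        "required": ["start_date", "end_date"],
    },
}
```

Multiply that by a dozen functions and the per-request overhead adds up fast.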
It’s a real bummer if that is the case, but yeah, I get an error if I pass in functions while using my fine-tuned model.
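For anyone who wants to reproduce it, this is roughly the call that fails for me. It’s a minimal sketch using the pre-1.0 openai Python SDK; the fine-tuned model ID and the function spec are placeholders, not my real ones:

```python
import openai

openai.api_key = "sk-..."  # your API key

response = openai.ChatCompletion.create(
    # Placeholder fine-tuned model ID; substitute your own.
    model="ft:gpt-3.5-turbo-0613:my-org::abc123",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    # Passing `functions` here is what triggers the error for me;
    # the identical call succeeds against base gpt-3.5-turbo-0613.
    functions=[
        {
            "name": "get_weather",  # made-up function for illustration
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
)
print(response)
```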
I originally interpreted OpenAI’s communication as meaning you wouldn’t be able to tune the function role, but I assumed that if I could pass functions in using gpt-3.5-turbo-0613, then I would be able to do the same with a model fine-tuned on that base.
I wish their documentation were a little clearer in that regard, but maybe I’m breezing through some of it too quickly.