Function calling doesn't work with fine-tuned model

Hello,
I have a fine-tuned gpt-3.5-turbo model that had been using functions for a few days. I can no longer use functions with it (whereas if I switch back to the standard gpt-3.5-turbo or gpt-4 models, they work correctly again).

Have there been any changes regarding the use of functions with fine-tuned models?

Thanks

Support for fine-tuning with function calling and gpt-3.5-turbo-16k will be coming later this fall.

The model that you train does not have the pre-training for understanding how to call functions, nor is it put behind an endpoint that will place the instructions in the model’s context.

Thank you
So the best solution is to use the gpt-3.5-turbo model to detect which function needs to be called, and then have the fine-tuned model reformulate the response.

Unless you have a special-purpose AI, it won’t invoke a function unless it is necessary for fulfilling a user request. So if gpt-3.5-turbo untrained with functions is your front line, it will just respond normally to the user if no function is invoked.

I suppose you could send the user input to both standard and fine-tune, and use a function if one is called by turbo, and if not, answer the user with the fine-tune, but that seems like quite the hack job.
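The dual-model idea above can be sketched as a small router: inspect the standard model's response for a `function_call`, and fall back to the fine-tune's plain answer when no function was chosen. This is a hypothetical sketch with placeholder response shapes, not real API calls.

```python
# Hypothetical router for the dual-model approach: the standard model's
# response (with functions defined) decides whether a function runs;
# otherwise the fine-tuned model's answer is used. The response dicts
# mimic the chat completions shape; no network calls are made here.

def route(standard_response: dict, fine_tune_answer: str) -> str:
    """Return the final answer based on whether the standard model
    chose to call a function."""
    message = standard_response["choices"][0]["message"]
    if message.get("function_call"):
        name = message["function_call"]["name"]
        # Dispatch to your own function handler here.
        return f"<execute {name}>"
    # No function was invoked: answer the user with the fine-tune's reply.
    return fine_tune_answer
```

As noted, this doubles the number of calls per user message, which is part of why it feels like a hack.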

Thank you for your answer. Here is what I tried, and it works perfectly:
The user question is sent to gpt-3.5-turbo
with an instruction like:
$instruction_primary = "If the question concerns X, direct to 'X function'. Otherwise, direct to the appropriate function. Redirect only, do not answer directly.";

where 'X function' is the function for the documentation of my ERP, which uses the fine-tuned model.

Then, inside the functions, I repeat the call to the fine-tuned model with the correct instruction.
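This two-step flow could be sketched as two request builders: one for the standard "router" model with the redirect-only instruction, and one for the fine-tuned model used inside the function handler. The model id, function name, and system prompts below are illustrative assumptions, not the poster's actual values.

```python
# Sketch of the two-step routing flow described above. Both functions
# build chat-completions request payloads as plain dicts; actually
# sending them to the API is left out.

ROUTER_INSTRUCTION = (
    "If the question concerns X, direct to 'X function'. "
    "Otherwise, direct to the appropriate function. "
    "Redirect only, do not answer directly."
)

def build_router_request(user_question: str) -> dict:
    """First call: the standard model only chooses a function."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": ROUTER_INSTRUCTION},
            {"role": "user", "content": user_question},
        ],
        "functions": [{
            "name": "x_function",  # placeholder: the ERP-documentation function
            "description": "Questions about the ERP documentation",
            "parameters": {"type": "object", "properties": {}},
        }],
    }

def build_fine_tune_request(user_question: str) -> dict:
    """Second call, made inside the chosen function handler."""
    return {
        "model": "ft:gpt-3.5-turbo:my-org::abc123",  # placeholder model id
        "messages": [
            {"role": "system", "content": "Answer using the ERP documentation."},
            {"role": "user", "content": user_question},
        ],
    }
```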

Fine-tuning for function calling landed today; you can now see the documentation here.

Interesting. OpenAI has again added a layer that obfuscates the assistant AI's actual generative language for calling a function in a fine-tune file. Your training might build on the normal function-call language behavior they already gave gpt-3.5-turbo, but once again you don't get to see how to fill the assistant conversation history with the actual AI language, which can end up training it to call functions incorrectly.

Then there is another variable in play: you must use the same functions in practice as in training to get maximum isolation of your model from standard chat behavior.

"Can train it on calling functions even when they aren't present"... Well, when you specify a function, are they injecting it into the system prompt anyway, ruining that goal? Instead, you should be able to show the function being called by the AI without the specification being present, I'd think. And that requires the training-file parser to let function_call responses through without the function definition.

And: do you even get a function-call-supporting endpoint if you don't include a function with your API call?

The example also says "with each line including a list of 'messages' and an optional list of 'functions'", but then the code block shows an incompatible multi-line demonstration.
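For what the quoted description implies, one training line would be a single JSON object with a "messages" list (including an assistant turn that calls a function) and an optional "functions" list. The function name, arguments, and schema below are illustrative, not taken from the docs.

```python
import json

# One line of a function-calling fine-tuning file, per the quoted
# description: a "messages" list plus an optional "functions" list,
# serialized as a single JSON object per line.

example = {
    "messages": [
        {"role": "user", "content": "What is the weather in Paris?"},
        {"role": "assistant", "function_call": {
            "name": "get_weather",                       # illustrative function
            "arguments": json.dumps({"city": "Paris"})}},
    ],
    "functions": [
        {"name": "get_weather",
         "description": "Look up current weather for a city",
         "parameters": {"type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"]}},
    ],
}

# The JSONL file is one such object per line, no pretty-printing.
jsonl_line = json.dumps(example)
```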

Thank you for the update! Just to be sure: the 3.5 16k model does not yet work with fine-tuning, right?

Correct, the 16k model is not yet fine-tuneable.
