Function calling in gpt-3.5-turbo-instruct

I notice that the Completion API does not support the functions argument as ChatCompletion does.

Was gpt-3.5-turbo-instruct not trained with this capability, or has the API simply not been updated?

Hi and welcome to the Developer Forum!

I don’t know whether function calling will be implemented for instruct models; both are fine-tunes of existing base models, and adding that capability could prove challenging.

That’s an interesting question.

Here’s an interesting fact: you get a differently-trained AI when you include functions in your API call. Injecting the exact same function language into the normal model produces no function call at all.

Function calling is implemented through the message format constructed from the messages you pass: the JSON function definitions are rendered for the AI in a special way.

Let’s treat gpt-3.5-turbo-instruct as a chat model and try to make it call for the weather forecast:
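A minimal sketch of what that experiment might look like. Everything here is illustrative: the chat-style prompt layout is my own flattening of the message format (the real internal format is undocumented), and `get_current_weather` is a hypothetical schema in the ChatCompletion `functions` style. The actual call to the legacy completions endpoint is shown only as a comment.

```python
import json
import textwrap

# Hypothetical function schema, written in the ChatCompletion "functions" style.
weather_function = {
    "name": "get_current_weather",
    "description": "Get the current weather forecast for a location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name"},
        },
        "required": ["location"],
    },
}

def build_chat_style_prompt(function_schema: dict, user_message: str) -> str:
    """Flatten a chat-with-functions exchange into one plain-text prompt.

    This is a guess at a reasonable layout, not the format the chat
    endpoint actually uses internally.
    """
    return textwrap.dedent(f"""\
        system: You are a helpful assistant. You may call the following
        function by replying with JSON of the form {{"function_call": ...}}:
        {json.dumps(function_schema)}
        user: {user_message}
        assistant:""")

prompt = build_chat_style_prompt(weather_function, "What's the weather in Paris?")
print(prompt)

# The prompt would then be sent to the completions endpoint, e.g.:
# openai.Completion.create(model="gpt-3.5-turbo-instruct", prompt=prompt)
```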

Results: It has no idea what to do with a function definition. It will hallucinate weather for you.

It does handle the “chat” containers, if not by fine-tuning then by its own general language understanding. A bit more investigation was needed to see whether the special token sequences are encoded as such by the completion endpoint’s encoder, and the answer is no: they are encoded as regular text tokens.

As for the older completion endpoint, I have no expectation that it could recognize and return a function_call object even if you coaxed the AI into producing one.

So if you want functions, you’ll have to do the extra work of prompt-programming the AI to produce the language you want conditionally, and then detecting that language yourself.
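One way to sketch that detection step: invent your own sentinel convention (the `FUNCTION_CALL:` prefix below is entirely made up, not anything the API defines), prompt-program the model to emit it only when a function is needed, and scan the completion text for it.

```python
import json
import re

# Hypothetical convention: we prompt-program the model to emit
#   FUNCTION_CALL: {"name": ..., "arguments": {...}}
# on its own line whenever it decides a function should be called.
CALL_PATTERN = re.compile(r"^FUNCTION_CALL:\s*(\{.*\})\s*$", re.MULTILINE)

def detect_function_call(completion_text: str):
    """Return (name, arguments) if the model emitted our sentinel, else None."""
    match = CALL_PATTERN.search(completion_text)
    if match is None:
        return None
    call = json.loads(match.group(1))
    return call["name"], call.get("arguments", {})

# Simulated completion from the instruct model (illustrative only).
sample = 'FUNCTION_CALL: {"name": "get_current_weather", "arguments": {"location": "Paris"}}'
print(detect_function_call(sample))
# → ('get_current_weather', {'location': 'Paris'})
```

If the sentinel is absent, the completion is just ordinary text and you return it to the user directly.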

Then, instead of feeding the result back to an AI with the same prompt, you’ll likely want to invoke one with different programming that tells it how to use the new information.
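That second invocation could be sketched like this: a fresh prompt whose “programming” is focused on answering from the returned data rather than on deciding whether to call a function. The layout and wording are, again, just one plausible choice.

```python
import json

def build_result_prompt(function_name: str, result: dict, user_message: str) -> str:
    """Second-stage prompt: different programming, centered on using the result."""
    return (
        "system: You answer the user's question using only the data below.\n"
        f"Data returned by {function_name}: {json.dumps(result)}\n"
        f"user: {user_message}\n"
        "assistant:"
    )

followup = build_result_prompt(
    "get_current_weather",
    {"location": "Paris", "forecast": "12 C, light rain"},
    "What's the weather in Paris?",
)
print(followup)
# This prompt goes back to the completions endpoint for the final answer.
```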

Sounds like a lot of work.