Forcing a specific function via "tool_choice" no longer allows functions to be called in parallel

Hi,

With the GPT-3.5 API (model 0125), I need to call several functions in parallel. It works fine as long as “tool_choice” is set to “auto”, but sometimes the model doesn’t call a function and generates a text response instead (even with “response_format”: {“type”: “json_object”}, it simply wraps its text in JSON).

To remedy this problem, I thought of simply forcing the function call by setting “tool_choice” to a value like {“type”: “function”, “function”: {“name”: “weatherForecast”}}, but now it no longer calls the functions in parallel; it only makes a single call.
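For clarity, here is a minimal sketch of the two request shapes being compared. The “weatherForecast” tool definition is a hypothetical stand-in (only the name comes from the post above); no network call is made here, this just shows the payload difference between “auto” and forcing a specific function.

```python
# Hypothetical tool definition; only the name "weatherForecast" is from the post.
tools = [
    {
        "type": "function",
        "function": {
            "name": "weatherForecast",
            "description": "Get the weather forecast for a location.",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        },
    }
]

# Variant 1: tool_choice="auto" -- the model may emit several tool calls
# in parallel, but may also answer with plain text instead.
auto_request = {
    "model": "gpt-3.5-turbo-0125",
    "messages": [
        {"role": "user", "content": "Compare the weather in Paris and Lyon."}
    ],
    "tools": tools,
    "tool_choice": "auto",
}

# Variant 2: forcing a specific function -- guarantees a call to
# weatherForecast, but (as observed) only a single call is produced.
forced_request = {
    **auto_request,
    "tool_choice": {"type": "function", "function": {"name": "weatherForecast"}},
}
```

Either dict would be passed as keyword arguments to the chat completions endpoint; the only difference is the “tool_choice” value.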

The 1106 model doesn’t have any “hallucinations” about whether to call a function or respond with a text message, but it has other problems, such as the accent bug in languages other than English and a sometimes weaker understanding of prompts than 0125. Above all, we don’t know whether this model will be deprecated in the coming months; there is no information on the subject, and OpenAI visibly evolves its models very quickly, so I think 1106 is not reliable for a production environment.

In short, if you have a tip for preserving parallel function calls while “forcing” a specific function, I’m interested. My system prompt contains an instruction saying that the same function must be called several times to compare or retrieve data for two different locations, and it works as long as tool_choice is set to “auto”…

Thank you! And good luck with your dev. Sometimes I feel like I’m repainting the same wall every morning because its color changed overnight… Not easy to accept when you’re used to coding in a classic, strict way.
