Possible to force calling multiple functions in parallel?

I am trying to leverage the new parallel function calling by forcing two functions to be called in one request.

However, the tool_choice param only seems to accept one function name at a time.

tool_choice={
  "type": "function",
  "function": { "name": "function1" },
}

Does that mean that only the same function can be run multiple times in parallel, as in the provided example? Or is it somehow possible to force it to call two different functions?


The only way I found so far was using tool_choice="auto" and asking in the prompt to execute both functions.
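For reference, a minimal sketch of that workaround. The function names, schemas, and model name here are placeholders; only the overall request shape follows the Chat Completions format:

```python
# Sketch of the tool_choice="auto" workaround: declare both tools and
# explicitly ask for both in the prompt. function_1/function_2 are
# placeholder names, not real functions.
tools = [
    {
        "type": "function",
        "function": {
            "name": "function_1",
            "description": "First placeholder function.",
            "parameters": {"type": "object", "properties": {}},
        },
    },
    {
        "type": "function",
        "function": {
            "name": "function_2",
            "description": "Second placeholder function.",
            "parameters": {"type": "object", "properties": {}},
        },
    },
]

request = {
    "model": "gpt-4-1106-preview",  # placeholder; any parallel-tool-calling model
    "messages": [
        {"role": "user", "content": "Call function_1 AND function_2 now."}
    ],
    "tools": tools,
    "tool_choice": "auto",  # "auto" is the only option that allows both; you cannot pass a list of forced functions
}
```

You would then pass `request` to the chat completions endpoint and hope the model emits both tool calls in one response.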

You just leave it auto (default when there are tools) and let it decide. Are you trying to get it to call the same function, or different functions? From my tests it will call the same function in parallel but not different functions. To get it to call another function you have to pass the results of the first back in.
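A sketch of that round trip, assuming the Chat Completions tool-calling message format (the IDs, names, and result payload here are made up): after the first response you append the assistant's tool_calls message and a matching "tool" result message, then call the model again so it can pick the next function.

```python
import json

# Conversation state after executing the first tool call locally.
# The assistant message mirrors what the API returns; the values are examples.
messages = [
    {"role": "user", "content": "Do both tasks."},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_abc123",  # example ID; the API generates this
                "type": "function",
                "function": {"name": "function_1", "arguments": "{}"},
            }
        ],
    },
    {
        # Result of running function_1 yourself, linked by tool_call_id.
        "role": "tool",
        "tool_call_id": "call_abc123",
        "content": json.dumps({"status": "ok"}),  # placeholder result
    },
]
# A second request with these messages lets the model call function_2.
```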

2 different functions at once in parallel. (Example: function_1 and function_2)

Setting auto works but may be unreliable. Ideally I would like to force the function calls:

tool_choice={
  "type": "function",
  "function": [ { "name": "function_1" }, { "name": "function_2" } ],
}

Any update? I’m trying to do the same thing, but have found the model can become quite “lazy” and inconsistent with its output.

E.g. sometimes it will carry out all the desired tool calls, and sometimes only one. Furthermore, I’ve found that the model often produces less content when it’s asked to carry out functions in parallel than when it’s asked to perform the calls individually, almost as if it’s trying to cut down on output tokens because it has to produce more content at once.

As far as I’m aware, there’s nothing you can do with the tool calls param to resolve this.

Results have been better when using more advanced models and being very explicit that it needs to call multiple functions, but the best solution for me has been to simply pass previous function calls in as context and call the model once for each subsequent function call.
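That sequential approach can be sketched as a loop: force exactly one function per request and carry all prior calls and results forward as context. The stub below stands in for the real API call (every name and payload in it is hypothetical):

```python
import json

def fake_model_call(messages, forced_function):
    """Stub for a real chat-completions request made with
    tool_choice={"type": "function", "function": {"name": forced_function}}.
    Returns an assistant message in the documented tool_calls shape."""
    return {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": f"call_{forced_function}",  # real IDs come from the API
            "type": "function",
            "function": {"name": forced_function, "arguments": "{}"},
        }],
    }

messages = [{"role": "user", "content": "Do both tasks."}]
for name in ["function_1", "function_2"]:
    # One forced function per request; earlier calls stay in the context.
    assistant_msg = fake_model_call(messages, name)
    messages.append(assistant_msg)
    messages.append({
        "role": "tool",
        "tool_call_id": assistant_msg["tool_calls"][0]["id"],
        "content": json.dumps({"status": "ok"}),  # placeholder result
    })
```

This trades one request for N requests, which is exactly the latency and input-token cost the thread is trying to avoid, but it is reliable.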

Perhaps finetuning could be the solution…

Let me know if you’ve found a solution for this though, as the quicker production times and reduced input tokens would definitely be welcome!