I noticed reasoning models have trouble calling functions in parallel. Is this expected?
I see in the OAI blog
> Some OpenAI models support the parameter `parallel_tool_calls` which allows the model to return an array of functions which we can then execute in parallel. However, reasoning models may produce a sequence of function calls that must be made in series, particularly as some steps may depend on the results of previous ones.
However, I’m surprised that, in scenarios with no obvious dependency between steps, reasoning models still do not parallelize calls (in the runs I’ve conducted).
You can work around this by giving the AI a single function that accepts an array of queries, and explaining in the function’s description why multiple varied queries are useful.
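A minimal sketch of that workaround, as a Chat Completions tool definition. The names here (`batch_search`, `queries`) are illustrative, not from any real API; only the JSON-schema shape of the `parameters` object follows the documented tool format.

```python
# Hypothetical tool: one function whose single argument is an array of
# independent queries, with a description that encourages batching.
batch_tool = {
    "type": "function",
    "function": {
        "name": "batch_search",  # illustrative name
        "description": (
            "Run one or more independent searches in a single call. "
            "Prefer passing several varied queries at once: independent "
            "queries are executed together, which is faster and cheaper "
            "than calling this function repeatedly."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "queries": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Independent queries to run together.",
                }
            },
            "required": ["queries"],
        },
    },
}
```

The model then emits one function call carrying several queries, which your code can fan out however it likes.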
Functions are one type of “tool”: all the different functions you provide are grouped together under a single function tool.
Parallel tool calls are implemented by a separate tool: a wrapper the AI is told to use when it wants to invoke functions from another tool. This has produced plenty of bad output: the AI sending ordinary function calls in the wrapper’s described format, using the wrapper’s name multi_tool_use in function calls, and so on. On top of those malfunctions, the extra tool language costs you more tokens in every API call.
I don’t see its absence as a loss, especially given how often this awkward pattern is misunderstood. A single function call may already return a satisfactory result for the user, or the AI can then pursue a different strategy, instead of blindly firing off many calls that fill the context with return values.
I still wonder why reasoning models don’t call tools in parallel even when I provide a trivially independent function (print_hello) and explicitly ask for parallel calls.
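Note that when a model does return several tool calls in one assistant message, nothing stops you from executing them concurrently on the client side. A sketch under assumed data shapes (the `tool_calls` dicts below mimic the Chat Completions format; `print_hello` stands in for the toy function mentioned above):

```python
import json
from concurrent.futures import ThreadPoolExecutor

# Local registry of actual Python implementations.
def print_hello(name: str) -> str:
    return f"hello, {name}"

REGISTRY = {"print_hello": print_hello}

def run_tool_calls(tool_calls):
    """Execute independent tool calls concurrently and return one
    role="tool" message per call, preserving the original order."""
    def run_one(call):
        fn = REGISTRY[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])
        return {
            "role": "tool",
            "tool_call_id": call["id"],
            "content": str(fn(**args)),
        }
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_one, tool_calls))

# Simulated assistant message containing two parallel calls:
calls = [
    {"id": "call_1", "function": {"name": "print_hello",
                                  "arguments": '{"name": "alice"}'}},
    {"id": "call_2", "function": {"name": "print_hello",
                                  "arguments": '{"name": "bob"}'}},
]
results = run_tool_calls(calls)
```

The sequencing problem is purely on the model side; the client loop above parallelizes whatever the model hands back.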
If the parallel wrapper tool is never placed in the AI’s context, it cannot use it. The wrapper can also be turned off via the boolean API parameter parallel_tool_calls, which is thankfully now exposed to developers.
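For illustration, a request body with the wrapper disabled. `parallel_tool_calls` is a documented Chat Completions parameter that defaults to true; the model name and message are placeholders.

```python
# With parallel_tool_calls=False the multi_tool_use wrapper is not
# injected into context, so the model can only emit one call per turn.
request = {
    "model": "gpt-4o",  # illustrative model name
    "messages": [
        {"role": "user", "content": "Say hello to alice and bob."}
    ],
    "tools": [...],  # your function tool definitions
    "parallel_tool_calls": False,
}
```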
For reference, this is what the AI is following in this prompted internal application:
```typescript
## multi_tool_use
// This tool serves as a wrapper for utilizing multiple tools. Each tool that can be used must be specified in the tool sections. Only tools in the functions namespace are permitted.
// Ensure that the parameters provided to each tool are valid according to that tool's specification.
namespace multi_tool_use {
// Use this function to run multiple tools simultaneously, but only if they can operate in parallel. Do this even if the prompt suggests using the tools sequentially.
type parallel = (_: {
// The tools to be executed in parallel. NOTE: only functions tools are permitted
tool_uses: {
// The name of the tool to use. The format should either be just the name of the tool, or in the format namespace.function_name for plugin and function tools.
recipient_name: string,
// The parameters to pass to the tool. Ensure these are valid according to the tool's own specifications.
parameters: object,
}[],
}) => any;
} // namespace multi_tool_use
```
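One of the malfunctions described earlier is the model emitting a call under the wrapper’s own name. A defensive sketch, assuming the Chat Completions tool-call dict shape, that flattens such a leaked call back into ordinary (name, parameters) pairs:

```python
import json

def unpack_multi_tool_use(call):
    """If a tool call leaked through with the wrapper's name, flatten
    its tool_uses array into ordinary (function_name, parameters)
    pairs; otherwise return the call as a single-item list."""
    name = call["function"]["name"]
    if name not in ("multi_tool_use.parallel", "parallel"):
        return [(name, json.loads(call["function"]["arguments"]))]
    wrapped = json.loads(call["function"]["arguments"])
    return [
        # recipient_name may be "functions.foo" or just "foo"
        (use["recipient_name"].split(".")[-1], use["parameters"])
        for use in wrapped["tool_uses"]
    ]

# A leaked wrapper call, in the shape described by the spec above:
leaked = {"function": {
    "name": "multi_tool_use.parallel",
    "arguments": json.dumps({"tool_uses": [
        {"recipient_name": "functions.print_hello",
         "parameters": {"name": "alice"}},
        {"recipient_name": "print_hello",
         "parameters": {"name": "bob"}},
    ]}),
}}
flat = unpack_multi_tool_use(leaked)
```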
As for the decision not to offer it to reasoning models, only the decision-maker at OpenAI can answer that.