Parallel tool calling is a wrapper: an additional tool called multi_tool_use is placed into the model's context, and the developer's function names can then be placed within it.
Note the difference:
- tool: OpenAI internal tools
- function: the developer's functions, all placed under one tool type called "functions"
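For context, here is a minimal sketch of how developer functions are declared under the "function" tool type in the Chat Completions API; the function name and schema are placeholders, not anything from the original post:

```python
tools = [
    {
        "type": "function",          # developer-defined function, not an OpenAI internal tool
        "function": {
            "name": "get_weather",   # hypothetical function name for illustration
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]
```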
The primary way to tell whether it is working, i.e. whether the additional tool is actually being placed into the AI's context and offered to it as language (the AI often cannot understand or properly use the tool description that comes along with multi_tool_use), is to look at the input tokens you are billed. First compare requests with the endpoint's parallel tool calling parameter manually set to true versus false, then switch the AI model, and observe whether the input token count increases and decreases as expected, errors out, or silently fails to deliver. A sketch of that comparison follows below.
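A minimal sketch of that check, assuming the OpenAI Python SDK and a placeholder model name; it sends the same request with parallel_tool_calls on and off and compares the billed input tokens:

```python
from openai import OpenAI

client = OpenAI()

def prompt_tokens(parallel: bool) -> int:
    """Send the same request and return the billed input token count."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: substitute the model you are testing
        messages=[{"role": "user", "content": "What's the weather in Paris and in Tokyo?"}],
        tools=tools,                   # the developer function list sketched above
        parallel_tool_calls=parallel,  # toggles whether the wrapper tool may be injected
    )
    return response.usage.prompt_tokens

print("parallel on :", prompt_tokens(True))
print("parallel off:", prompt_tokens(False))
# A difference in the two counts suggests extra tool text is being added to the context.
```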
The previously demonstrated concern that remains unaddressed: