I have an assistant with a single function. Yet the Assistants API returns multiple tool_calls and expects results from all of them
I noticed the errors because my backend does a POST to …/submit_tool_outputs with the result for the one function call I defined; that request throws an error because the API expects outputs for multiple tool calls.
Upon further inspection, it seems the API often expects tool_calls for names including: search, calculate, or open_url
This doesn’t really make sense and is causing problems; I assume those are tools that OpenAI should be handling itself.
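In case it helps with debugging, here is a minimal sketch (assuming the v1.x openai Python SDK and the beta Assistants endpoints) that just logs which tool names and call IDs a run stuck in requires_action is actually waiting for, before anything is submitted:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def inspect_required_tool_calls(thread_id: str, run_id: str):
    """Print every tool call the run expects an output for."""
    run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)
    if run.status != "requires_action":
        print(f"run status is {run.status}, nothing to submit")
        return []

    tool_calls = run.required_action.submit_tool_outputs.tool_calls
    for call in tool_calls:
        # Shows every call the API wants an output for, including any
        # names you never defined (e.g. "search", "calculate").
        print(call.id, call.function.name, call.function.arguments)
    return tool_calls
```

If names you never defined show up here, that at least confirms the model is inventing tools rather than your backend dropping calls, and submit_tool_outputs will still want an output for each of those call IDs.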
Can you post code snippets of your API calls and the prompts behind them? Any function definitions you have as well, please, and could you also include the logs from the POST requests? As much data as you can, to help us help you.
The issue seems to be that the submit_tool_outputs endpoint expects outputs for other “tools”, which I guess are internal to OpenAI; e.g. I have seen “calculate” and “search”.
Hey, how did you fix this? I have several tools the assistant can call within a run, and I’m using Chainlit async. I’m always getting multiple, different call IDs when submitting tool outputs…
One tool function may be called several times in a single run; check the tool_calls to see whether the behavior is reasonable.
I encountered this error message before, but in my case the AI’s behavior was just fine: the user asked the AI to provide 3 suggestions, and it called my function 3 times at once with three different parameter sets in tool_calls. In such a case, I need to call my function three times and submit three results so the AI can return three suggestions at once.
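To illustrate that case, a sketch (again assuming the v1.x Python SDK; get_suggestion is a hypothetical stand-in for your real function) that runs the function once per tool_call and submits one output per tool_call_id in a single request, which is what submit_tool_outputs expects:

```python
import json
from openai import OpenAI

client = OpenAI()

def get_suggestion(**kwargs) -> dict:
    # Hypothetical placeholder for your actual function.
    return {"suggestion": f"placeholder for {kwargs}"}

def handle_required_action(thread_id: str, run_id: str):
    run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)
    tool_calls = run.required_action.submit_tool_outputs.tool_calls

    tool_outputs = []
    for call in tool_calls:
        # The same function name can appear several times with different arguments.
        args = json.loads(call.function.arguments)
        result = get_suggestion(**args)
        tool_outputs.append({
            "tool_call_id": call.id,       # one output per call ID
            "output": json.dumps(result),
        })

    # All outputs for this requires_action step go in one request.
    client.beta.threads.runs.submit_tool_outputs(
        thread_id=thread_id, run_id=run_id, tool_outputs=tool_outputs
    )
```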
Yes, but some functions can take much longer. For example, the model can request two functions in one tool_calls batch (first: 15 minutes, second: 1 second). In that case you have to wait the full 15 minutes before you can submit anything, even though you know the second one finished after a second.
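Since submit_tool_outputs wants all outputs for the step in one call, the best you can do within a single run is execute the tool functions concurrently, so the total wait is roughly the slowest call rather than the sum. A rough async sketch under those assumptions (AsyncOpenAI from the v1.x SDK; slow_tool / fast_tool are made-up names):

```python
import asyncio
import json
from openai import AsyncOpenAI

client = AsyncOpenAI()

# Hypothetical tool implementations.
async def slow_tool(**kwargs) -> dict:
    await asyncio.sleep(900)   # e.g. a 15-minute job
    return {"status": "done"}

async def fast_tool(**kwargs) -> dict:
    await asyncio.sleep(1)
    return {"status": "done"}

TOOLS = {"slow_tool": slow_tool, "fast_tool": fast_tool}

async def run_and_submit(thread_id: str, run_id: str):
    run = await client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)
    calls = run.required_action.submit_tool_outputs.tool_calls

    async def execute(call):
        args = json.loads(call.function.arguments)
        result = await TOOLS[call.function.name](**args)
        return {"tool_call_id": call.id, "output": json.dumps(result)}

    # Run everything concurrently: total wait is about the slowest call, not the sum.
    tool_outputs = await asyncio.gather(*(execute(c) for c in calls))

    await client.beta.threads.runs.submit_tool_outputs(
        thread_id=thread_id, run_id=run_id, tool_outputs=list(tool_outputs)
    )
```

You still can’t report the fast result early within the same run, but at least the slow call doesn’t add its time on top of the others.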