Assistants API expects multiple tool_calls, even though only one function provided

I have an assistant with a single function, yet the Assistants API returns multiple tool_calls and expects results from all of them.

I noticed the errors because my backend does a POST to …/submit_tool_outputs with the result for the one function call I defined; however, that throws an error, because the API expects outputs for multiple calls.

Upon further inspection, it seems the API often expects tool_calls for names including:

search, calculate, or open_url

This doesn't really make sense and is causing problems; I assume those are tools that OpenAI should be handling.


Can you post code snippets of your API calls and the prompts behind them? Any function definitions you have as well, please, and could you include the logs from the POSTs? Share as much data as you can to help us help you.

Here is an example of the failing call:

    {
      "tool_outputs": [
        {
          "tool_call_id": "call_0ORM2unDOuMOpNyEdF1Plmk7",
          "output": "example output redacted"
        }
      ]
    }

and the error it gets back:

    {
      "error": {
        "message": "Expected tool outputs for call_ids ['call_0ORM2unDOuMOpNyEdF1Plmk7', 'call_kdPD6iLYf9Sx1wla6BwTbqYe'], got ['call_0ORM2unDOuMOpNyEdF1Plmk7']",
        "type": "invalid_request_error",
        "param": null,
        "code": null
      }
    }

As I mentioned, I only have a single function defined; here is a slightly redacted version of it (the parameter names are the same):

  "name": "example_name",
  "description": "Example description.",
  "parameters": {
    "type": "object",
    "properties": {
      "reason": {
        "type": "string",
        "description": "Example description."
      "urgency": {
        "type": "string",
        "description": "Example description."
    "required": [

The issue seems to be that the submit_tool_outputs endpoint expects outputs from other "tools", which I guess are internal to OpenAI; for example, I have seen "calculate" and "search".
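For anyone hitting the same error: the API lists every call_id it wants an output for in required_action.submit_tool_outputs.tool_calls, and submit_tool_outputs fails unless you return exactly one entry per listed id. A minimal sketch of that contract, using plain dicts and invented handler names rather than the real SDK objects:

```python
import json

def build_tool_outputs(tool_calls, handlers):
    """Return one tool_outputs entry per required tool call.

    tool_calls: dicts with "id" and "function" ({"name", "arguments"})
    handlers:   dict mapping function name -> callable(args_dict) -> result
    """
    outputs = []
    for call in tool_calls:
        name = call["function"]["name"]
        args = json.loads(call["function"]["arguments"])
        handler = handlers.get(name)
        # Even an unrecognized call needs *some* output, or the run errors out.
        result = handler(args) if handler else {"error": f"unknown tool {name}"}
        outputs.append({"tool_call_id": call["id"], "output": json.dumps(result)})
    return outputs

# Mocked required_action payload: two parallel calls to the same function.
calls = [
    {"id": "call_A", "function": {"name": "example_name",
                                  "arguments": '{"reason": "r1", "urgency": "high"}'}},
    {"id": "call_B", "function": {"name": "example_name",
                                  "arguments": '{"reason": "r2", "urgency": "low"}'}},
]
outputs = build_tool_outputs(calls, {"example_name": lambda a: {"ok": a["urgency"]}})
assert [o["tool_call_id"] for o in outputs] == ["call_A", "call_B"]
```

The key point is that the loop covers every entry in tool_calls, so the set of returned tool_call_ids always matches what the run is waiting for.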

Fixed the problem!

    import json
    import time

    # Assumes `client`, `thread`, `run`, and `google_search` are already defined.
    while True:
        run_status = client.beta.threads.runs.retrieve(
            thread_id=thread.id, run_id=run.id
        )

        if run_status.status == 'completed':
            break

        elif run_status.status == 'requires_action':
            tools_outputs = []

            for tool_call in run_status.required_action.submit_tool_outputs.tool_calls:
                if tool_call.function.name == "google_search":
                    arguments = json.loads(tool_call.function.arguments)

                    print(f"Running tool - '{tool_call.function.name}' | With args - {arguments}")
                    output = google_search(arguments["query"])
                    print(len(output), output)

                    tool_output = {"tool_call_id": tool_call.id, "output": json.dumps(output)}
                    tools_outputs.append(tool_output)

            if run_status.required_action.type == 'submit_tool_outputs':
                print("Submit output")
                client.beta.threads.runs.submit_tool_outputs(
                    thread_id=thread.id, run_id=run.id, tool_outputs=tools_outputs
                )

        time.sleep(1)  # avoid hammering the API while polling


Thanks for the help here. It was not exactly what I needed, but it gave me the clue to get my setup working with multiple concurrent tool calls.


Hey, how did you fix this? I have several tool calls it can make within a run, and it's using Chainlit async. It's always getting multiple and different call IDs when submitting tool outputs…

One tool function may be called several times; check tool_calls to see whether the behavior is reasonable.

I encountered this error message before, but in my case the AI's behavior was just fine: the user asked the AI to provide 3 suggestions, and it called my function 3 times at once, with three different parameter sets in tool_calls. For such a case, I need to call my function three times and submit three results so the AI can provide all three suggestions at once.
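To illustrate the case above: parallel tool calling can put several calls to the same function, each with its own call id and argument set, into a single requires_action. A small self-contained sketch (the function name and arguments here are invented):

```python
import json

def suggest(topic):
    # Hypothetical stand-in for the poster's suggestion function.
    return f"suggestion about {topic}"

# The model asked for 3 suggestions, so tool_calls holds three entries
# for the *same* function, each with its own call id and arguments.
tool_calls = [
    {"id": f"call_{i}", "function": {"name": "suggest",
                                     "arguments": json.dumps({"topic": t})}}
    for i, t in enumerate(["pasta", "sushi", "tacos"])
]

# One output per call id; all three are submitted together in one request.
tool_outputs = [
    {"tool_call_id": c["id"],
     "output": json.dumps(suggest(json.loads(c["function"]["arguments"])["topic"]))}
    for c in tool_calls
]
```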


Yes, but some functions can take much longer than others. For example, the model can request two functions in one tool_calls batch (first: 15 minutes, second: 1 second). In that case you have to wait 15 minutes before you can submit anything, even though you know the second one finished after just a second.
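One way to soften that is to run the handlers concurrently, so the wall-clock cost is the slowest call rather than the sum of all of them; the run itself still won't continue until every output is submitted. A sketch using a thread pool, with short sleeps standing in for a slow and a fast tool (all names here are made up):

```python
import json
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical handlers; the sleep simulates a long-running tool.
def slow_tool(args):
    time.sleep(0.2)
    return "slow done"

def fast_tool(args):
    return "fast done"

HANDLERS = {"slow_tool": slow_tool, "fast_tool": fast_tool}

def run_tools_concurrently(tool_calls):
    def run_one(call):
        handler = HANDLERS[call["function"]["name"]]
        result = handler(json.loads(call["function"]["arguments"]))
        return {"tool_call_id": call["id"], "output": json.dumps(result)}
    # Wall-clock cost becomes max(durations) instead of their sum.
    with ThreadPoolExecutor(max_workers=len(tool_calls)) as pool:
        return list(pool.map(run_one, tool_calls))

calls = [
    {"id": "call_slow", "function": {"name": "slow_tool", "arguments": "{}"}},
    {"id": "call_fast", "function": {"name": "fast_tool", "arguments": "{}"}},
]
start = time.monotonic()
outputs = run_tools_concurrently(calls)
elapsed = time.monotonic() - start
```

Note this only parallelizes your side of the work; submit_tool_outputs is still a single call that must carry every output for the run.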

Yeah, I experienced this myself recently too. We fixed it by just building an array and returning each response with the appropriate tool_call_id.