Model tries to call unknown function multi_tool_use.parallel

Is there some reason I’m missing why it would be undesirable for the API surface to have guardrails that filter out, or retry, when it’s about to return hallucinated function calls or tool uses that were not specified in the request?
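
For what it’s worth, this kind of guardrail is straightforward to approximate client-side in the meantime. A minimal sketch in TypeScript (the types, helper names, and retry policy are all mine, not part of any SDK): validate the returned tool calls against the tool names declared in the request, and retry when any name is unknown.

// Sketch of a client-side guardrail: reject/retry when the model emits a
// tool name that was never declared. All names here are illustrative.
type ToolCall = { id: string; function: { name: string; arguments: string } };

function findHallucinatedCalls(
  toolCalls: ToolCall[],
  declaredNames: Set<string>,
): ToolCall[] {
  return toolCalls.filter((c) => !declaredNames.has(c.function.name));
}

async function completeWithGuardrail(
  makeRequest: () => Promise<{ toolCalls: ToolCall[] }>,
  declaredNames: Set<string>,
  maxRetries = 2,
) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await makeRequest();
    if (findHallucinatedCalls(response.toolCalls, declaredNames).length === 0) {
      return response; // every call targets a declared tool
    }
  }
  throw new Error("Model kept emitting undeclared tool calls");
}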

2 Likes

Has anyone found a way to get the model to reliably produce this hallucination? I’ve noticed it happens occasionally in my case, but I’m trying to trigger it on demand so I can test whether my code handles it correctly. Thanks
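
One way around the reproducibility problem is to not wait for the model at all: fabricate the hallucinated call yourself and feed it to your handler in a unit test. A TypeScript sketch, assuming the payload shape reported later in this thread (myFunctionName and handleToolCall are placeholders):

// Fabricated tool call matching the reported hallucination shape, so the
// handling code can be exercised deterministically in a test.
const fakeHallucinatedCall = {
  id: "call_test_123",
  type: "function" as const,
  function: {
    name: "multi_tool_use.parallel",
    arguments: JSON.stringify({
      tool_uses: [
        { recipient_name: "functions.myFunctionName", parameters: { foo: "bar" } },
        { recipient_name: "functions.myFunctionName", parameters: { foo: "baz" } },
      ],
    }),
  },
};

// In a test: await handleToolCall(fakeHallucinatedCall);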

1 Like

It suddenly started happening all the time two days ago, but I didn’t see it at all yesterday, and never before that either.

2 Likes

I got this error today. I think it’s related to Parallel Function Calling that was enabled on certain models recently.

3 Likes

Bump. Do we have a fix for this?

2 Likes

I also suddenly got a large batch of these hallucinated function calls very recently, starting yesterday or so…
What is the fix?

1 Like

I don’t think there is a fix, as it is a bug on the OpenAI side. I had to add extra error handling for these hallucinated functions. Basically you have to abandon the run, which sucks.
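
For reference, a minimal sketch of that handling with the Node SDK (openai v4 package; types loosened and retries elided, so treat it as an outline rather than production code):

import OpenAI from "openai";

const openai = new OpenAI();

// When a run's required_action contains the hallucinated function,
// abandon (cancel) the run instead of trying to submit tool outputs.
async function handleRequiredAction(threadId: string, run: any) {
  const calls = run.required_action?.submit_tool_outputs?.tool_calls ?? [];
  if (calls.some((c: any) => c.function.name === "multi_tool_use.parallel")) {
    await openai.beta.threads.runs.cancel(threadId, run.id);
    return null; // caller re-creates the run and retries
  }
  return calls; // otherwise execute these and submit tool outputs as usual
}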

1 Like

Got this same issue today, so the bug is still around. It must be something internal, as I have previously seen it call parallel functions successfully by sending an array of tool_calls. When I send a message where the appropriate response would be to call multiple functions in parallel, sometimes it does it correctly (an array of tool_calls), and sometimes there is a single item in the tool_calls array with the following structure:

{
  "id": "call_123",
  "name": "multi_tool_use.parallel",
  "arguments": {
    "tool_uses": [
      {
        "recipient_name": "functions.myFunctionName",
        "parameters": {
          // my function parameters
        }
      },
      {
        "recipient_name": "functions.myFunctionName",
        "parameters": {
          // my function parameters
        }
      }
    ]
  }
}

Hope this bug can be found and fixed!

3 Likes

It is because the AI is making poor use of this wrapper, which is injected into its context as a tool-language definition and was designed for a more capable AI.

## multi_tool_use

// This tool serves as a wrapper for utilizing multiple tools. Each tool that can be used must be specified in the tool sections. Only tools in the functions namespace are permitted.
// Ensure that the parameters provided to each tool are valid according to that tool's specification.
namespace multi_tool_use {

  // Use this function to run multiple tools simultaneously, but only if they can operate in parallel. Do this even if the prompt suggests using the tools sequentially.
  type parallel = (_: {
    // The tools to be executed in parallel. NOTE: only functions tools are permitted
    tool_uses: {
      // The name of the tool to use. The format should either be just the name of the tool, or in the format namespace.function_name for plugin and function tools.
      recipient_name: string,
      // The parameters to pass to the tool. Ensure these are valid according to the tool's own specifications.
      parameters: object,
    }[],
  }) => any;

} // namespace multi_tool_use

You also will not have good luck suppressing this with logit_bias on “multi”, because once the AI has started the tool-call token sequence with its internal “JSON mode”, all your potential corrective actions are completely blocked by OpenAI.
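
To illustrate what such a (futile) attempt looks like, a sketch against the Chat Completions API; the token id below is a placeholder, not a real lookup:

import OpenAI from "openai";

const openai = new OpenAI();

const myTools: OpenAI.Chat.Completions.ChatCompletionTool[] = [
  /* your function tool definitions */
];

// logit_bias maps token ids to a bias in [-100, 100]; "0" is a PLACEHOLDER:
// you would look up the real token id(s) for "multi" with a tokenizer.
// Per the above, this has no effect once the model is inside the tool channel.
const response = await openai.chat.completions.create({
  model: "gpt-4-1106-preview",
  messages: [{ role: "user", content: "Look up the weather in two cities." }],
  tools: myTools,
  logit_bias: { "0": -100 },
});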

Solution: Find every single person responsible for every single “feature” announced at devday, and make sure they don’t work in AI. Start at the top again.

1 Like

It seems to be happening quite often today…

I am also running into this issue today. Sometimes I get the parallel tool calls properly and sometimes I get an error because of this missing multi_tool_use.parallel function.

I’m also hitting this. Just saw it for the first time today.

Indeed, the practical fix is just to unpack these calls, run each nested function, and merge the results into one output sharing the same call id.

Example in TypeScript:

// `call` is one element of the assistant message's tool_calls array;
// `methods` maps function names to implementations; `tool_outputs` collects results.
const function_name = call.function.name;
const args = JSON.parse(call.function.arguments);
try {
  // 2023-11-12 undocumented feature, to be removed when the API is stable
  if (function_name === "multi_tool_use.parallel") {
    const uses: any[] = args["tool_uses"];
    const uses_results: any[] = [];
    for (const use of uses) {
      const recipient = use.recipient_name; // e.g. "functions.myFunctionName"
      const bare_name = recipient.split(".")[1];
      const result = await methods[bare_name](use.parameters);
      uses_results.push(result);
    }
    // Merge the nested results into a single output under the original call id
    tool_outputs.push({
      tool_call_id: call.id,
      output: uses_results.join("\n"),
    });
  } else {
    const result = await methods[function_name](args);
    tool_outputs.push({
      tool_call_id: call.id,
      output: result,
    });
  }
} catch (e) {
  // Report the failure back to the model rather than crashing the run
  tool_outputs.push({
    tool_call_id: call.id,
    output: `Error: ${e}`,
  });
}

1 Like

Hi guys. I’ve been facing this same situation, especially after switching my assistant to GPT-4o.

I’ve solved it by using the following prompt:

“You shall only invoke the following defined functions: //list of defined functions. **You should NEVER invent or use functions NOT defined or NOT listed HERE, especially the multi_tool_use.parallel function. If you need to call multiple functions, you will call them one at a time**.”

It’s now calling functions one by one as expected. It’s a bit of a shame, since parallel function calling is a useful feature, but as I suspect this is a bug in the new GPT model, this can serve as a temporary workaround until OpenAI fixes it.
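
If you go this route, it may help to generate the instruction from the tools you actually declare, so the list can never drift out of sync. A quick sketch (the helper name is mine):

// Build the guard prompt from the declared tool definitions themselves.
function buildToolGuardPrompt(tools: { function: { name: string } }[]): string {
  const names = tools.map((t) => t.function.name).join(", ");
  return (
    `You shall only invoke the following defined functions: ${names}. ` +
    `You should NEVER invent or use functions NOT listed here, especially ` +
    `the multi_tool_use.parallel function. If you need to call multiple ` +
    `functions, call them one at a time.`
  );
}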

2 Likes

Are folks seeing this in both Chat Completions and Assistants API? Can anyone share an example to repro?

1 Like

It would be a lot easier to diagnose this, and to evaluate how often this tool is emitted as a function name instead of being used properly, if logprobs weren’t blocked from developers once the AI produces the initial (un-reweightable) tool token…

I’m using a more elegant solution. Since we control the entire chat history, we can make GPT believe it called separate functions instead of multi_tool_use. So I edit GPT’s tool-call messages before adding them to the chat history (I’m writing in C#):

if (msg.ToolCalls.Any(t => t is ToolCallFunctions f && f.Function!.Name == "multi_tool_use.parallel"))
{
    // Edit the received tool calls so the message looks like a sequence of plain function calls
#if DEBUG
    Trace.WriteLine("multi_tool_use.parallel detected");
#endif
    var newToolCalls = new List<IToolCall>();
    foreach (IToolCall tool in msg.ToolCalls)
    {
        if (tool is ToolCallFunctions f && f.Function!.Name == "multi_tool_use.parallel")
        {
            MultiToolParallel? multitools = JsonSerializer.Deserialize<MultiToolParallel>(f.Function!.Arguments!);
            if (multitools == null)
                throw new InvalidDataException("multi_tool_use.parallel arguments are null");
            foreach (MultiToolElement subtool in multitools.ToolUses)
            {
                var functionName = subtool.RecipientName;
                // Strip the "functions." namespace prefix (10 characters)
                if (functionName.StartsWith("functions."))
                    functionName = functionName[10..];
                newToolCalls.Add(new ToolCallFunctions(
                    Guid.NewGuid().ToString(), // random id
                    new FunctionCall(
                        functionName,
                        subtool.Parameters.ToString(),
                        null)));
            }
        }
        else
        {
            newToolCalls.Add(tool);
        }
    }

    msg.ToolCalls = newToolCalls;
}

Can confirm this issue is significantly impacting the Chat Completions API as well; it’s triggering while using a third-party tool library. I thought it was a server stability issue on my end until I thought to just ask the model what tools it believed it could access.

(Follow-up post with the documentation provided by the completions API - this was a fresh conversation, with no leading messages beyond what you saw in my first post. It directly mirrors the wrapper posted above.)

[UPDATE]
A further, rather interesting, update after some debugging that both solved the bug for me and unlocked parallel function calls. I do not know if this will work for people using first-party function calling, but at the very least I can confirm it is working reliably on a LangGraph backend paired with the Chat Completions API.

I am now successfully getting consistent parallel function calls via a system prompt instruction to prefix the name of any function being called with ‘functions.’ (for example, ‘functions.[function_name_here]’). After a few rounds of debugging, it became clear that the failure was coming from multi-function calls that lacked this prefix before the names of the functions within.

Equally interestingly, once that was addressed, parallel calls succeeded whether I explicitly instructed the model to include or to exclude the multi_tool_use prefix. I can only speculate as to why; my best guess is that OpenAI is already filtering it out on their end.
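
Given that, a defensive dispatcher can simply tolerate both forms. A small sketch (the helper name is mine) that strips the namespace prefix when present:

// Accept both "myFn" and "functions.myFn" when resolving the target function.
function normalizeRecipientName(recipient: string): string {
  return recipient.startsWith("functions.")
    ? recipient.slice("functions.".length)
    : recipient;
}

// normalizeRecipientName("functions.specialised_knowledge") => "specialised_knowledge"
// normalizeRecipientName("specialised_knowledge")           => "specialised_knowledge"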

Some examples.
This worked:

{
  "tool_uses": [
    {
      "recipient_name": "functions.specialised_knowledge",
      "parameters": {
        "query": "Query1"
      }
    },
    {
      "recipient_name": "functions.specialised_knowledge",
      "parameters": {
        "query": "Query2"
      }
    }
  ]
}

This worked:

{
  "recipient_name": "multi_tool_use.parallel",
  "parameters": {
    "tool_uses": [
      {
        "recipient_name": "functions.specialised_knowledge",
        "parameters": {
          "query": "Query1"
        }
      },
      {
        "recipient_name": "functions.specialised_knowledge",
        "parameters": {
          "query": "Query2"
        }
      }
    ]
  }
}

This did not work:

{
  "tool_uses": [
    {
      "recipient_name": "specialised_knowledge",
      "parameters": {
        "query": "Query1"
      }
    },
    {
      "recipient_name": "specialised_knowledge",
      "parameters": {
        "query": "Query2"
      }
    }
  ]
}
1 Like