Model tries to call unknown function multi_tool_use.parallel

Is there some reason I’m missing why it would be undesirable for the API surface to have guardrails that filter out, or retry, when it’s about to return hallucinated function calls or tool uses that were not specified in the request?

2 Likes

Has anyone been able to find a way to get the model to reliably produce this hallucination? I’ve noticed it does it sometimes in my case, but I’m trying to trigger it on demand now so I can test whether my code handles it correctly. Thanks

1 Like

It suddenly happened all the time two days ago, but I didn’t see it at all yesterday, and never before that either.

2 Likes

I got this error today. I think it’s related to the parallel function calling that was recently enabled on certain models.

3 Likes

Bump. Do we have a fix for this?

2 Likes

I also suddenly started getting a huge batch of these hallucinated functions very recently, from yesterday or so…
Is there a fix?

1 Like

I don’t think there is a fix, as it is a bug on the OpenAI side. I had to add extra error handling for these hallucinated functions. Basically you have to abandon the run, which sucks.
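
Roughly, something like this is what I mean (a sketch assuming the openai Python SDK’s Assistants beta interface; the thread/run polling around it is not shown):

from openai import OpenAI

def abandon_if_hallucinated(client: OpenAI, thread_id: str, run) -> bool:
    # When the run asks for the non-existent multi_tool_use.parallel,
    # cancel ("abandon") the run instead of trying to submit tool outputs.
    calls = run.required_action.submit_tool_outputs.tool_calls
    if any(c.function.name == "multi_tool_use.parallel" for c in calls):
        client.beta.threads.runs.cancel(run_id=run.id, thread_id=thread_id)
        return True
    return False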

1 Like

Got this same issue today, so it seems the bug is still around. It must be something internal, as I have seen it successfully call parallel functions by sending an array of tool_calls before. When I send a message where the appropriate response would be to call multiple functions in parallel, sometimes it does it correctly (array of tool_calls) and sometimes there is a single item in the tool_calls array with the following structure:

{
  ID: "call_123",
  Name: "multi_tool_use.parallel",
  Arguments: [
    {
      Name: "tool_uses",
      Value: [
        {
          parameters: {
            // my function parameters
          },
          recipient_name: "functions.myFunctionName"
        },
        {
          parameters: {
            // my function parameters
          },
          recipient_name: "functions.myFunctionName"
        }
      ]
    }
  ]
}
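
For comparison, this is roughly what that looks like in the raw JSON of the response (the field values below are made up; note that arguments arrives as a JSON-encoded string):

import json

# Illustrative only: the raw tool_calls entry, with "arguments" as a JSON string
hallucinated_call = {
    "id": "call_123",
    "type": "function",
    "function": {
        "name": "multi_tool_use.parallel",
        "arguments": json.dumps({
            "tool_uses": [
                {"recipient_name": "functions.myFunctionName", "parameters": {}},
                {"recipient_name": "functions.myFunctionName", "parameters": {}},
            ]
        }),
    },
}

# Unpacking it back into ordinary per-function calls:
for use in json.loads(hallucinated_call["function"]["arguments"])["tool_uses"]:
    print(use["recipient_name"].split(".")[-1], use["parameters"])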

Hope this bug can be found and fixed!

2 Likes

It is because the AI is poorly using this wrapper that OpenAI injects into the model’s tool-definition language, which was designed for a more capable AI.

## multi_tool_use

// This tool serves as a wrapper for utilizing multiple tools. Each tool that can be used must be specified in the tool sections. Only tools in the functions namespace are permitted.
// Ensure that the parameters provided to each tool are valid according to that tool's specification.
namespace multi_tool_use {

// Use this function to run multiple tools simultaneously, but only if they can operate in parallel. Do this even if the prompt suggests using the tools sequentially.
type parallel = (_: {
// The tools to be executed in parallel. NOTE: only functions tools are permitted
tool_uses: {
// The name of the tool to use. The format should either be just the name of the tool, or in the format namespace.function_name for plugin and function tools.
recipient_name: string,
// The parameters to pass to the tool. Ensure these are valid according to the tool's own specifications.
parameters: object,
}[],
}) => any;

} // namespace multi_tool_use

You also will not have good luck affecting this by messing with logit_bias for “multi”: once the AI is writing the function call through its internal “JSON mode”, any corrective logit_bias you send is simply not applied by OpenAI.
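
For reference, this is the kind of attempt I mean, and it makes no difference (a sketch with the openai Python SDK and tiktoken; my_tools stands in for your own function definitions):

import tiktoken
from openai import OpenAI

# Down-weight the token(s) for "multi" in the hope of suppressing the
# "multi_tool_use.parallel" function name. Per the above, this does not help:
# the bias is not applied to the function-call output.
enc = tiktoken.encoding_for_model("gpt-4-1106-preview")
bias = {str(tok): -100 for tok in enc.encode("multi")}

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Look up the weather and the news."}],
    tools=my_tools,  # placeholder: your existing function/tool definitions
    logit_bias=bias,
)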

Solution: Find every single person responsible for every single “feature” announced at devday, and make sure they don’t work in AI. Start at the top again.

1 Like

It seems to be happening quite often today…

I am also running into this issue today. Sometimes I get the parallel tool calls properly and sometimes I get an error because of this missing multi_tool_use.parallel function.

You can handle these by deserializing the arguments, allowing for the function name possibly being prefixed with its namespace.

Then give each unpacked tool call the same ID as the original call. This transforms them into the same structure as normal tool calls.

At the end you can loop through these tools by ID, combining the results into one.

So far this has worked for me.
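
A minimal Python sketch of that last merging step, assuming the outputs are dicts shaped like the Assistants submit_tool_outputs payload (tool_call_id / output):

from collections import defaultdict

def merge_outputs_by_call_id(outputs):
    # All unpacked calls were given the same (original) tool_call_id above,
    # so group their results and combine them into one output per id.
    grouped = defaultdict(list)
    for entry in outputs:
        grouped[entry["tool_call_id"]].append(str(entry["output"]))
    return [
        {"tool_call_id": call_id, "output": "\n".join(results)}
        for call_id, results in grouped.items()
    ]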

I would completely ignore how it’s delivered and manage the sequence yourself. I’ll post some code shortly, I’m just on my phone. Mine just naively ignores any duplicates, which probably isn’t the best, but in my case I am just sending an “OK”.

I’ve also always had the function name prefixed. So, it’s not the best code, but it dang works, and as you can see I just rushed to get it through :rofl:

Sorry for the delay, here’s the code:

if function.name == "multi_tool_use.parallel" {
    // Caught: we need to deserialize the hallucinated wrapper's arguments.
    // `tool` is the original tool call from the model's response; `function` is its function field.
    let fuckin_openai = serde_json::from_str::<FuckGPTLOL>(&function.arguments).unwrap();
    let tool_uses = fuckin_openai.tool_uses;

    for tool_use in tool_uses {
        let tool = ToolCall {
            // Reuse the original call's id and type for every unpacked call.
            id: tool.id.clone(),
            call_type: tool.call_type.clone(),
            function: Function {
                // Strip the "functions." namespace prefix from recipient_name.
                name: tool_use
                    .recipient_name
                    .clone()
                    .rsplit('.')
                    .next()
                    .unwrap()
                    .to_string(),
                arguments: serde_json::to_string(&tool_use.parameters).unwrap(),
            },
        };
        tools_requested.push(tool);
    }
}

fn remove_duplicate_outputs(
    outputs: Vec<SubmitToolOutput>,
) -> Result<Vec<SubmitToolOutput>, OpenAIError> {
    let mut unique_tool_outputs: Vec<SubmitToolOutput> = Vec::new();

    for tool_output in outputs {
        if !unique_tool_outputs
            .iter()
            .any(|output| output.tool_call_id == tool_output.tool_call_id)
        {
            unique_tool_outputs.push(tool_output);
        }
    }

    Ok(unique_tool_outputs)
}

I asked GPT to translate the above code block into Python:

import json

if function.name == "multi_tool_use.parallel":
    # Caught: we need to deserialize the hallucinated wrapper's arguments
    fuckin_openai = json.loads(function.arguments)
    tool_uses = fuckin_openai['tool_uses']

    for tool_use in tool_uses:
        # `tool` is the original tool call from the response, as in the Rust
        # version above: every unpacked call reuses its id and type.
        new_call = {
            'id': tool.id,
            'call_type': tool.type,
            'function': {
                # Strip the "functions." namespace prefix from recipient_name
                'name': tool_use['recipient_name'].rsplit('.', 1)[-1],
                'arguments': json.dumps(tool_use['parameters']),
            }
        }
        tools_requested.append(new_call)

I’m also hitting this. Just saw it for the first time today.

Indeed, the final solution is just to merge these calls into one output sharing the same call id.

Example in TS:

const function_name = call.function.name || call.function;
const args = JSON.parse(call.function.arguments); // parse the call's JSON arguments
try {
  // 2023-11-12 undocumented feature, to be removed when the API is stable
  if (function_name === "multi_tool_use.parallel") {
    const uses: any[] = args["tool_uses"];
    const uses_results: any[] = [];
    for await (let use of uses) {
      // recipient_name arrives as "functions.myFunctionName"
      const recipient = use.recipient_name;
      const splitted_function_name = recipient.split(".")[1];
      const result = await methods[splitted_function_name](use.parameters);
      uses_results.push(result);
    }
    // Merge all results into a single output for the original call id
    tool_outputs.push({
      tool_call_id: call.id,
      output: uses_results.join("\n"),
    });
  } else {
    const result = await methods[function_name](args);
    tool_outputs.push({
      tool_call_id: call.id,
      output: result,
    });
  }
} catch (e) {
  // Surface tool errors back to the model instead of failing the run
  tool_outputs.push({
    tool_call_id: call.id,
    output: `Error: ${e}`,
  });
}
1 Like

Hi guys. I’ve been facing this same situation, especially after switching my assistant to GPT-4o.

I’ve solved it by using the following prompt:

“You shall only invoke the following defined functions: //list of defined functions. **You should NEVER invent or use functions NOT defined or NOT listed HERE, especially the multi_tool_use.parallel function. If you need to call multiple functions, you will call them one at a time**.”

It’s now calling functions one by one as expected, although it’s a bit odd since parallel function calling is a cool feature. However, as I suspect it’s a bug in the new GPT model, this can be a temporary workaround until it’s fixed by OpenAI.
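
If you’re on the Assistants API, one way to apply this is to append the instruction to the assistant’s existing instructions, for example (a sketch with the openai Python SDK; “asst_XXXX” is a placeholder for your own assistant ID):

from openai import OpenAI

client = OpenAI()

guard = (
    "You shall only invoke the defined functions. You should NEVER invent or use "
    "functions NOT defined or NOT listed here, especially the multi_tool_use.parallel "
    "function. If you need to call multiple functions, call them one at a time."
)

# Fetch the current instructions and append the guard sentence to them.
assistant = client.beta.assistants.retrieve("asst_XXXX")
client.beta.assistants.update(
    assistant.id,
    instructions=(assistant.instructions or "") + "\n\n" + guard,
)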

1 Like