Model tries to call unknown function multi_tool_use.parallel

Hi, while testing out an assistant in the Playground, it tried to call a function called “multi_tool_use.parallel”, like:

      "required_action": {
        "type": "submit_tool_outputs",
        "submit_tool_outputs": {
          "tool_calls": [
              "id": "call_0obfNi7th7uJiMp6pHUkkh8Z",
              "type": "function",
              "function": {
                "name": "multi_tool_use.parallel",
                "arguments": "{\n  \"tool_uses\": [...]\n}"

This seems to contradict the documentation, which says multiple parallel tools should be returned as an array of `tool_calls`, and I can’t find any documentation on this function. This seems like unexpected behavior at the least!
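For reference, the `arguments` string of that hallucinated call contains a `tool_uses` array that can at least be unpacked back into ordinary per-function calls. A rough sketch of that (the helper name is mine, and the `recipient_name`/`parameters` keys are an assumption about the payload shape — only `tool_uses` is visible in the truncated dump above):

```python
import json

def unpack_parallel_call(tool_call: dict) -> list:
    """Unpack a hallucinated multi_tool_use.parallel call into
    (function_name, parameters) pairs.

    Assumes the arguments look like:
    {"tool_uses": [{"recipient_name": "functions.<name>", "parameters": {...}}]}
    """
    args = json.loads(tool_call["function"]["arguments"])
    calls = []
    for use in args.get("tool_uses", []):
        name = use.get("recipient_name", "")
        # the model tends to prefix names with a "functions." namespace
        if name.startswith("functions."):
            name = name[len("functions."):]
        calls.append((name, use.get("parameters", {})))
    return calls
```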


Could be an internal tool module set up by the OpenAI team @kraemahz

Perhaps an issue with the library? Have you tried upgrading to the latest version?

The call happened while using the playground with an assistant, so this is all on OAI’s side.

Hey @kraemahz, did you end up finding what could be causing that? I want to run multiple functions, but I keep getting the multi_tool_use.parallel error.

@kraemahz I got the same output. It looks like it occurs when the model’s function calling decides it needs to run multiple tools, so probably something internal, yep.

How did you solve this problem? I got this function too, but I only declared one method in the function call.

Just debugging some code here and this appeared:

Seems to be related to


Welcome to the community @Edwin5 , hope you’ve been able to resolve your issue.

Welcome to the community @caiopetrellicominato, hope you’ve been able to resolve your issue.


Hitting this too. Is this by design, and should it be supported by host applications?

I ran into the same today, though in my case the function is named `parallel`, without the leading `multi_tool_use.`. No resolution at the moment.

I keep getting the same. It must be a training issue, mustn’t it? I’ve been changing the instructions a lot and I’m still getting this issue.

I’ve also been facing this issue every now and then. In my experience the calls follow a predictable structure that can be reliably hotfixed. I’m guessing OpenAI will fix this eventually, but in the meantime I built a quick patch for this that’s easy to drop into a project to make the problem go away:


Usage is as follows:
pip install openai_multi_tool_use_parallel_patch

In your code:

import openai_multi_tool_use_parallel_patch  # import applies the patch
import openai

client = openai.AsyncOpenAI(...)  # sync client will be patched too


response = await client.chat.completions.create(...)  # no changes to the call signature or response vs the vanilla OpenAI client

FYI: The patch uses removeprefix(), added in Python 3.9, but the PyPI metadata says it’s compatible with 3.8.
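For anyone who would rather inline the fix than add a dependency, the core rewrite is small. Here is a sketch along the same lines (my own, not the package internals; it assumes the `tool_uses`/`recipient_name`/`parameters` payload shape reported above and handles both function names seen in this thread):

```python
import json

# both "multi_tool_use.parallel" and a bare "parallel" have been reported
PARALLEL_NAMES = {"multi_tool_use.parallel", "parallel"}

def flatten_tool_calls(tool_calls: list) -> list:
    """Replace any hallucinated parallel-wrapper call with its constituent
    calls, leaving normal tool calls untouched.

    Each element is a dict shaped like the API's tool_call objects; the
    payload keys inside "arguments" are an assumption based on reports here.
    """
    flat = []
    for call in tool_calls:
        fn = call.get("function", {})
        if fn.get("name") in PARALLEL_NAMES:
            uses = json.loads(fn.get("arguments", "{}")).get("tool_uses", [])
            for i, use in enumerate(uses):
                # removeprefix() requires Python 3.9+
                name = use.get("recipient_name", "").removeprefix("functions.")
                flat.append({
                    "id": f"{call.get('id', 'call')}_{i}",  # synthesize distinct ids
                    "type": "function",
                    "function": {
                        "name": name,
                        "arguments": json.dumps(use.get("parameters", {})),
                    },
                })
        else:
            flat.append(call)
    return flat
```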

'str' object has no attribute 'removeprefix'

Yeah, all good, thanks. I just updated the openai package to its most recent version and implemented it as the docs suggested; it works fine.

@atwoodjw_agh Yeah, my bad. It also turns out I published the patch with a typo, so it would not have worked anyway; it’s fixed now and I just re-released to PyPI. I also tested with Python 3.7, so things should work okay now. Unfortunately I cannot actually reproduce this with a real OpenAI API call at the moment, but if these glitchy responses still get generated in the same format, the patch should work.
