Feature request: guarantee a specific tool call without blocking parallel calls

Right now there’s no way to say “the model must call tool X, but can also call other tools in the same turn.” You can force a specific function, but then that’s the only call it makes. You can set tool_choice: "required" to get parallel calls, but there’s no guarantee a particular tool is in the mix.

My situation

I have an agent that needs to call update_plan on the first turn for task tracking. I also want the model to start doing real work in that same response - reading files, searching, exploring - in parallel. Today I can’t get both.

What exists today

  • Forced function (tool_choice: {"type": "function", "name": "update_plan"}): Guarantees the call, but it’s the only one. No parallel work happens.
  • "required": Parallel calls work, but nothing guarantees update_plan is one of them.
  • allowed_tools with mode: Restricts the tool set, still doesn’t guarantee a specific tool gets called.
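For reference, the three controls above correspond to request fragments roughly like the following. This is a sketch using Responses-API-style shapes as described in this post; the `allowed_tools` field names in particular are my assumption of the current shape, so double-check against the docs.

```python
# Sketches of the three existing tool_choice controls (request fragments only).

# 1. Forced function: guarantees the call, but it is the only call made.
forced = {
    "tool_choice": {"type": "function", "name": "update_plan"},
}

# 2. "required": some tool call(s) must happen, possibly in parallel,
#    but no specific tool is guaranteed to be among them.
required = {
    "tool_choice": "required",
    "parallel_tool_calls": True,
}

# 3. allowed_tools: restricts which tools may be called, still without
#    guaranteeing that any particular one is called.
allowed = {
    "tool_choice": {
        "type": "allowed_tools",
        "mode": "required",
        "tools": [{"type": "function", "name": "update_plan"}],
    },
    "parallel_tool_calls": True,
}
```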

What I’d like

Something that means: “you must call update_plan at least once, and you can call whatever else you want too.” Required inclusion, not exclusive forcing.

Could look something like:

{
  "tool_choice": "required",
  "required_tools": [
    { "type": "function", "name": "update_plan" }
  ],
  "parallel_tool_calls": true
}

Or as an extension to tool_choice:

{
  "tool_choice": {
    "type": "required_tools",
    "must_include": [
      { "type": "function", "name": "update_plan" }
    ],
    "allow_additional": true
  },
  "parallel_tool_calls": true
}

Not married to either shape - just need the capability.

Why it matters

Without this, the workaround is to set tool_choice: "required", hope the model includes update_plan, and if it doesn’t, send a follow-up call that forces it. That works but it’s an extra round-trip and more orchestration code for something that should be expressible upfront.
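The workaround above can be sketched as a small check-and-retry pattern. The helper below is plain logic; the orchestration around it is shown only as comments, since the exact `client.responses.create` call and output shapes depend on your setup and are assumptions here.

```python
# Sketch of the require -> check -> follow-up-force workaround.

def missing_required_tool(tool_calls, required_name):
    """True if none of the returned function calls hit the required tool."""
    return all(call.get("name") != required_name for call in tool_calls)

# Orchestration outline (illustrative, not executed here):
#
# first = client.responses.create(..., tool_choice="required",
#                                 parallel_tool_calls=True)
# calls = [{"name": item.name} for item in first.output
#          if item.type == "function_call"]
# if missing_required_tool(calls, "update_plan"):
#     # the extra round-trip this feature request would eliminate
#     client.responses.create(
#         ...,
#         tool_choice={"type": "function", "name": "update_plan"},
#     )
```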

If there’s already a way to do this that I’m missing, I’d love to know.

References

This is a good question @manouchehri, and you’re not missing anything.

Based on the current tool-calling behavior, there isn’t a built-in way to say “must include tool X, but may call others too” in a single turn. The controls behave exactly as you described: there is no native “must_include + allow_additional” mode, and your workaround (require → check → follow-up force call if missing) is basically the cleanest deterministic pattern today.

You can also push some of this into prompt-level instruction (“Always call update_plan before any other tool on first turn”), but that won’t give you hard guarantees the way tool_choice does.

I’ll flag this as a legitimate capability gap. The “required inclusion without exclusivity” concept is a pretty reasonable extension of the current API semantics. Appreciate the detailed write-up, that made this easy to reason about.


That’s a good question, and you are missing plenty, because OpenAI’s documentation of how this stuff works is itself missing plenty.


It sounds like you are referring to internal tools on the Responses API when you say “reading files & searching”. Those are incompatible top-level “tools”, unlike your API functions, which are supported in parallel.

The parameter that you can enable and disable should actually be called parallel function calls. Let the internal “tool” that is used for wrapping and calling developer functions in parallel (where “functions” is its own tool, multi_tool_use is its own tool, and you don’t get to address true tool recipients) explain in its own words why your idea is not currently possible:

## Namespace: multi_tool_use

### Target channel: commentary

### Description
This tool serves as a wrapper for utilizing multiple tools. Each tool that can be used must be specified in the tool sections in the developer message. Only tools in the functions namespace are permitted.
Ensure that the parameters provided to each tool are valid according to that tool's specification.

### Tool definitions
// Use this function to run multiple tools simultaneously, but only if they can operate in parallel. Do this even if the prompt suggests using the tools sequentially.
type parallel = (_: {
// The tools to be executed in parallel. NOTE: only functions tools are permitted
tool_uses: Array<
{
// The name of the tool to use. The format must be functions.<function_name>.
recipient_name: string,
// The parameters to pass to the tool. Ensure these are valid according to the tool's own specifications.
parameters: object,
}
>,
}) => any;

Highlights

  • Disobey the instructions that the developer has crafted
  • Only tools in the functions namespace are permitted
  • NOTE: only functions tools are permitted
  • Must be functions.<function_name>

Then you have the simple fact that the “required” setting targets the “function.name” logit, not a switch within the choice of using the parallel tool wrapper.

Implementing “call anything in parallel”, with a priority system on top, would take platform-wide re-engineering, breaking the pattern and evals of every function-calling model since 2023-11-06.

@_j: No. Please stop replying with GPT 5 nano generated messages.

It sounds like you are referring to internal tools on the Responses API when you say “reading files & searching”. Those are incompatible top-level “tools”, unlike your API functions, which are supported in parallel.

They’re not incompatible. Please spend 60 seconds with codex to ask it to try something before you go off making up “facts” (yet again). I literally have spent over ten billion tokens with my own internal tools on the Responses API this week, so safe to say it’s very compatible.

The parameter that you can enable and disable should actually be called parallel function calls.

Parallel tool calls are enabled by default. I literally show "parallel_tool_calls": true in my example of what I’d like to use, so obviously I am aware of that parameter. It is clear you do not understand my actual question (unlike the OpenAI bot at Feature request: guarantee a specific tool call without blocking parallel calls - #2 by OpenAI_Support, which actually did understand; please take notes @_j).

Implementing “call anything in parallel”, with a priority system on top, would take platform-wide re-engineering, breaking the pattern and evals of every function-calling model since 2023-11-06.

You’ve given these junk boilerplate AI generated claims before. e.g. the silly claim at: Why doesn't the Responses API support logprobs? - #2 by _j, only for it to be obviously slop a few weeks later in Why doesn't the Responses API support logprobs? - #7 by manouchehri.

This is very frustrating because you keep derailing conversations and steering away people who can actually help provide real answers.

Nothing that I wrote is AI-generated except for GPT-5.2 on Responses reciting back exactly what it receives in context, in the code fence block.

That is in contrast to the prior answer that merely serves to placate.

Internal tools have their own individual parallel methods, such as sending multiple search queries in an array, or sending multiple diff hunks for patching, as encouraged by the codex-run system prompt. Those are also “askable” and discoverable, for the curious who are literate instead of surprised by literacy.

If you don’t like what surrounds it, which came out of my own keypresses, read what the AI itself follows in the provided markdown code fence.


To classify who is actually running AI to respond, have a look:

Hi @manouchehri,

At the moment, the most reliable approach is to maximize the probability that the model invokes the tool on every turn. After setting tool_choice to required, I’d suggest:

  • Adding a developer message that explicitly instructs the model to call the specific function on every turn.
  • Renaming the function to clearly indicate it’s intended to be called each turn.
  • Updating the function description to reflect that expectation.
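The suggestions above can be combined into a single request, sketched below. The model name, function name, and schema are all illustrative assumptions; the point is the developer message, the per-turn naming, and the description working together with tool_choice: "required".

```python
# Sketch of the prompt-steering mitigation: maximize the odds that the
# required tool is among the parallel calls (no hard guarantee).

request = {
    "model": "gpt-5",  # placeholder model name
    "input": [
        {
            "role": "developer",
            "content": "Call update_plan_every_turn before any other tool, on every turn.",
        },
        {"role": "user", "content": "Start working on the task."},
    ],
    "tools": [
        {
            "type": "function",
            "name": "update_plan_every_turn",  # renamed to signal per-turn use
            "description": "Must be called on EVERY turn to record the current plan.",
            "parameters": {
                "type": "object",
                "properties": {"plan": {"type": "string"}},
                "required": ["plan"],
            },
        }
    ],
    "tool_choice": "required",     # guarantees *some* tool call...
    "parallel_tool_calls": True,   # ...but not this specific one
}
```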