New API feature: forcing function calling via `tool_choice: "required"`

We’ve added the ability to force the model to call one of your provided functions (via tools) in the Chat Completions and Assistants APIs by setting tool_choice: "required".

When you set tool_choice: "required", the model will determine which function(s) are relevant and call them.

For more details, reference our guide and API reference.
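A minimal sketch of what such a request could look like. This assumes the OpenAI Python SDK's `client.chat.completions.create(**kwargs)` call; the `get_weather` function and its schema are illustrative, not part of the announcement, and the block only assembles the request arguments so it can be inspected without a network call:

```python
def build_request(messages):
    """Assemble kwargs for client.chat.completions.create(**kwargs).

    tool_choice: "required" forces the model to call one (or more) of the
    provided tools instead of replying with plain text.
    """
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # illustrative example function
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ]
    return {
        "model": "gpt-4-turbo",     # any model that supports tool calling
        "messages": messages,
        "tools": tools,
        "tool_choice": "required",  # the new forcing option
    }

req = build_request([{"role": "user", "content": "Weather in Paris?"}])
```

Passing `req` to `client.chat.completions.create(**req)` would then yield a response whose message contains `tool_calls` rather than plain text.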

Give it a try and let us know your feedback!



It seems OpenAI keeps the special tokens and sequences the model writes outside the scope of logit_bias, including the initial generated tokens that signal tool calls, which prevents flexibility and even workarounds for model training issues.

It even goes so far as providing an alternate version of logprobs and simply disabling logprobs on function calls, as if token numbers and their encoding strings were some secret genius plan.

Feedback: especially on Chat Completions, a tool_bias abstraction applied to the first generated tokens could let one control entry into a tool-generation sequence with more nuanced hinting to the softmax, essentially piggybacking on how certain the model already is about invoking a tool in order to increase or discourage that certainty.


Thanks for this update!

  1. Is there a dedicated way to follow minor API updates like this in the future? (I care about this stuff, and almost missed this post.)
  2. Will this affect whether message.content is returned along with message.tool_calls? (Guessing not, but worth asking, since a request for tool_calls only is, imho, a valid use case.)

Thank you @brianz-oai , this is really useful. I’m wondering for option 2:

To force the model to call only one specific function

Is there a way to force calling multiple functions?


Hi Ram,

Thanks for your questions.

  1. There is a changelog page you can follow for API feature releases like this!
  2. I assume you are referring to the behavior where, under the default tool_choice: "auto" setting, the model sometimes outputs message.content alongside message.tool_calls. When you set tool_choice: "required" or to a specific tool, the model won’t return any message.content in the current implementation.
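For reference, the three tool_choice forms discussed in this thread could be written as follows. This is a sketch of the parameter values only; "my_function" is an illustrative name, not one from the thread:

```python
# The three tool_choice settings discussed in this thread.

# Model decides on its own; may answer in plain text with no tool call.
tool_choice_auto = "auto"

# Model must call at least one of the provided tools.
tool_choice_required = "required"

# Model must call exactly this named function ("my_function" is illustrative).
tool_choice_specific = {
    "type": "function",
    "function": {"name": "my_function"},
}
```

Any of these values can be passed as the tool_choice argument alongside the tools list in a Chat Completions request.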




Unfortunately we don’t provide such an option in our API at the moment.

As a workaround, since all of our newer models support parallel function calling, when you provide multiple tools and set tool_choice: "required", the model can generally do a good job of picking multiple relevant tools as long as you have a good prompt.
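When the model does return several parallel tool calls, the client loops over message.tool_calls and answers each one with a tool-role message. A hedged sketch of that dispatch step, where `message` mimics the shape of `response.choices[0].message` and the `search`/`load` handlers are stand-ins for real application code:

```python
import json

def dispatch(message, handlers):
    """Run every tool call the model returned and collect tool result messages."""
    results = []
    for call in message.get("tool_calls") or []:
        name = call["function"]["name"]
        args = json.loads(call["function"]["arguments"])  # arguments arrive as a JSON string
        output = handlers[name](**args)
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],   # ties the result back to the specific call
            "content": json.dumps(output),
        })
    return results

# Simulated response message with two parallel tool calls (shapes only):
message = {
    "tool_calls": [
        {"id": "call_1", "function": {"name": "search", "arguments": '{"q": "llamas"}'}},
        {"id": "call_2", "function": {"name": "load", "arguments": '{"doc_id": 7}'}},
    ]
}
handlers = {"search": lambda q: {"hits": [q]}, "load": lambda doc_id: {"doc": doc_id}}
results = dispatch(message, handlers)
```

The resulting tool messages would then be appended to the conversation before the next model call.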

Let me know if you have further questions.



Thanks Brian!

Regarding 2: yes, that’s what I meant.
I have another follow-up question if that’s OK:
With required, does the model return an empty tool_calls in case nothing fits, or will it return whatever it thinks fits best?


Under tool_choice: "required", the model will be forced to pick the most relevant one to call, even if none really fits the prompt. Returning an empty tool_calls when there isn’t an appropriate tool to call is what the default tool_choice: "auto" is for.
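In client code, this distinction shows up as whether the returned message carries tool_calls at all. A small sketch, using dicts shaped like `response.choices[0].message` (the names inside are illustrative):

```python
def classify_reply(message):
    """Under "auto" the model may answer in plain text; under "required" a
    tool call is always present, even if none of the tools truly fits."""
    if message.get("tool_calls"):
        return "tool_call"
    return "text_answer"

# What a "required" reply always looks like (some tool is picked regardless):
reply_required = {
    "tool_calls": [{"id": "call_1",
                    "function": {"name": "best_fit", "arguments": "{}"}}]
}

# What an "auto" reply can look like when no tool fits:
reply_auto = {"content": "No tool fits; answering directly.", "tool_calls": None}
```

So a client that needs "no tool fits" as a possible outcome should stay on "auto"; under "required" that branch never occurs.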


Perhaps add to the documentation for how many steps this requirement is imposed on the AI.

An assistant that could not output anything but tools would have no mechanism to respond to the user, and an “out” like a “final_answer” function could not be used in assistants without abandoning the thread.

Likewise, a tool combo requiring iterations like a “search” and then a “load” would not have the same mandate on followups if it was just the initial turn, defeating expectations.


I’ve found this works poorly with the Assistants API when I am trying to get it to answer a prompt in relation to a file and then perform a function call @brianz-oai .

If I don’t have tool_choice={"type": "file_search"}, it seems unwilling to call my function if the file search doesn’t return any results (even though in this case I have prompted it explicitly to call my function in a specific manner).

If I call it with tool_choice={"type": "function", "function": {"name": "my_fn"}}, it neglects to do a file search.

If I call it with tool_choice="required", it’s a crapshoot!

I guess I have to rely on validating the result and retrying if it doesn’t match, but it would be nice if function tools and internal tools were handled separately here.
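The validate-and-retry approach mentioned above could be sketched as follows. `run_once` stands in for one Assistants or Chat Completions round trip; here it is stubbed with canned messages so the control flow can be shown without network calls, and `my_fn` is the poster's placeholder function name:

```python
def call_with_retry(run_once, expected_fn, max_attempts=3):
    """Re-run the model call until it returns a call to expected_fn, or give up."""
    for _ in range(max_attempts):
        message = run_once()
        calls = message.get("tool_calls") or []
        if any(c["function"]["name"] == expected_fn for c in calls):
            return message
    raise RuntimeError(f"model never called {expected_fn!r}")

# Stubbed round trips: the first attempt returns no tool call, the second succeeds.
attempts = iter([
    {"tool_calls": []},
    {"tool_calls": [{"function": {"name": "my_fn", "arguments": "{}"}}]},
])
result = call_with_retry(lambda: next(attempts), "my_fn")
```

In real code, `run_once` would create a run (or chat completion) and return the resulting message; capping attempts keeps a misbehaving model from looping forever.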

I’m noticing that when setting "tool_choice": "required" that finish_reason is stop and not tool_calls. This seems wrong?


I’m not super familiar with Assistants API best practices, but it sounds like in your case one solution would be to first trigger a run for the file_search tool use (without enabling the function tool), and then a separate run that requires tool_choice with the function tool enabled?

Yes, that is the expected behavior for tool_choice: "required", though I agree it’s somewhat confusing.

To provide a bit more context: before we introduced this new feature, when you set tool_choice: {"type": "function", "function": {"name": "my_function"}}, the finish_reason would always be stop rather than tool_calls. Only when you used the default tool_choice: "auto" option and the model chose to use a tool would the finish_reason be tool_calls. So when we designed this new feature, we thought it made more sense to keep behavior consistent with tool_choice: {"type": "function", "function": {"name": "my_function"}}, since the two are more similar (i.e. the model is forced to use a tool).

Fixing this now could potentially break some users’ integrations, but we will almost certainly fix this when we release the next API version.
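Given that quirk, a defensive client probably shouldn't branch on finish_reason alone when tool_choice is forced. A sketch of checking the message itself instead, using a dict shaped like `response.choices[0]`:

```python
def has_tool_calls(choice):
    """finish_reason can be "stop" under forced tool_choice, so inspect the
    message's tool_calls rather than trusting finish_reason alone."""
    return bool(choice.get("message", {}).get("tool_calls"))

# The surprising-but-current behavior: finish_reason "stop" with tool calls present.
choice = {
    "finish_reason": "stop",
    "message": {"tool_calls": [{"id": "call_1",
                                "function": {"name": "my_fn", "arguments": "{}"}}]},
}
```

If the finish_reason behavior changes in a future API version as suggested above, this check keeps working either way.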


Sounds good! Thanks for clarifying. This isn’t an issue and I understand where you’re coming from. I’ve made updates to handle this, and agree that it’s better to maintain existing integrations with your API.

Thank you!


@brianz-oai Do you happen to know when tool_choice: "required" will be available in Azure OpenAI?

I don’t have visibility into or knowledge on that, but I’d guess it won’t be long.