The ability to call multiple user tools in parallel is either there – or not.
Parallel tool use has also had complications with “strict” structured-output functions.
How does it work? A special tool is placed alongside the “functions” tool: a recipient that receives and dispatches those parallel function calls.
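From the developer’s side, what you control is the request: you send your function tools plus the `parallel_tool_calls` flag, and the model’s single assistant message can then carry several entries in `tool_calls`. A minimal sketch of such a Chat Completions request payload — the function names and schemas here are hypothetical, not from the screenshots:

```python
# Sketch of a Chat Completions request that permits parallel calls.
# The function names (get_weather, get_time) are hypothetical examples.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_time",
            "description": "Get the local time for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    },
]

payload = {
    "model": "gpt-4.1-mini",
    "messages": [{"role": "user", "content": "Weather and time in Paris?"}],
    "tools": tools,
    # Set False to suppress the injected wrapper tool (and its token cost).
    "parallel_tool_calls": True,
}
```

Setting `parallel_tool_calls: false` is the documented way to turn the wrapper off when you want one call at a time.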
This is how it looks to the AI on gpt-4.1-mini on the Chat Completions endpoint:
And here is what is available to the o4-mini API model:
There is no wrapper tool for making parallel function calls. That means multiple round trips to your API backend, with input billing for the full context on each individual function call on Chat Completions (though you get a token savings by not having that additional tool always present).
You can see above that I had to fulfill two separate function calls to satisfy my “parallel” use case.
That same prompt with o4-mini on Responses put the internal iterator into a loop, returning a single function call only after 3354 output tokens. Then it failed when I returned that single function’s value: the Responses Playground apparently now tries to resend and re-bloat the context with a prior reasoning message ID in the messages list alongside the tool return, which requires the same state storage as reusing a response ID — a prior response ID, which it doesn’t use.
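The state-aware follow-up that would avoid the re-bloat looks roughly like this: reference the prior turn via `previous_response_id` and send only the `function_call_output` item, letting the server-side state carry the reasoning items. A sketch of that request payload (the IDs are placeholders, and this is the call the Playground apparently is not making):

```python
# Sketch of a Responses API follow-up that returns a tool result by
# referencing stored server state, rather than replaying reasoning items.
followup = {
    "model": "o4-mini",
    "previous_response_id": "resp_abc123",  # placeholder prior response ID
    "input": [
        {
            "type": "function_call_output",
            "call_id": "call_xyz789",  # placeholder call ID from the prior turn
            "output": '{"temperature": "20C"}',
        }
    ],
    "store": True,  # server keeps state, so reasoning items need not be resent
}
```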
If one is anointed from on high, you mean
Are you handing those out without any obligation to check a moderator inbox?