Not that specific strange name - that seems to be random happenstance. But there are other cases of the AI not writing its tool function calls correctly, and of it being confused by the added complexity of parallel tool calls and the specifications it receives.
Selection of the output token happens after the AI model generates probabilities for what the next token should be, a process called “sampling”: each candidate token is chosen at random in proportion to the likelihood the model assigned to it.
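Here's a minimal sketch of that selection step (plain numpy, with made-up logits for a tiny vocabulary, not anything from the actual API): softmax turns the model's scores into probabilities, and the token is then drawn at random weighted by those probabilities.

```python
import numpy as np

# Hypothetical raw scores (logits) for a tiny 4-token vocabulary
logits = np.array([4.0, 2.5, 1.0, -1.0])

def sample_token(logits, temperature=1.0, rng=np.random.default_rng()):
    # Softmax: convert logits into a probability distribution
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Weighted random draw - this is the "sampling" step
    return rng.choice(len(probs), p=probs), probs

token_id, probs = sample_token(logits)
print(token_id, probs)  # the top token wins ~78% of the time, not 100%
```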
Since there's no way to know ahead of time whether the AI will even consider using a function - that is its own probability - there's no easy way of adapting a “reliable mode” to the AI's language writing. The quality of token prediction comes down to the quality of the model's training and architecture, measured by its perplexity.
The API (not Assistants) lets you adjust sampling parameters anywhere between “boring” and “completely broken”, and you might already know temperature. To get rid of unlikely choices - like a 2% chance that a JSON doesn't start with { or [ - you can use the nucleus sampling parameter top_p with a value such as 0.80, so only the tokens making up the top 80% of cumulative probability are considered.
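For example, top_p is just a request parameter on the chat completions endpoint. A quick sketch with the openai Python SDK (the model name is only a placeholder; substitute whatever you actually call):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-turbo",  # placeholder model name
    messages=[
        {"role": "system", "content": "Reply only with a JSON object."},
        {"role": "user", "content": "List three colors under the key 'colors'."},
    ],
    top_p=0.8,        # nucleus sampling: keep only the top 80% of probability mass
    temperature=1.0,  # lowering this also suppresses unlikely tokens
)
print(response.choices[0].message.content)
```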