Thanks.
It seems OpenAI keeps the special tokens and sequences the AI writes outside the scope of logit_bias, including the initial generated tokens that signal a tool call. This blocks flexibility and even rules out workarounds for model training issues.
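For context, here is a minimal sketch of how logit_bias is normally applied to ordinary text tokens on Chat Completions (model name and bias values are illustrative). The special tokens that open a tool-call sequence are, per the observation above, exactly what this mechanism cannot reach:

```python
import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.encoding_for_model("gpt-4")

# Suppress an ordinary word: every token of "certainly" gets -100.
# This works because these are plain text tokens; the hidden tokens
# that begin a tool call are not accepted here.
bias = {str(t): -100 for t in enc.encode("certainly")}

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Greet me."}],
    logit_bias=bias,  # accepted for plain text tokens only
)
print(resp.choices[0].message.content)
```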
They go further still, providing an altered version of logprobs and simply shutting logprobs off on function calls, as if token numbers and their encoding strings were some secret genius plan.
Feedback: especially on Chat Completions, a `tool_bias` parameter, implemented as an abstraction over the first generated tokens, could let one control entry into a tool-generation sequence with more nuanced hinting to the softmax. Essentially, it would piggyback on how certain the AI already is about invoking a tool in response, and increase or discourage that certainty. A rough sketch follows.
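A sketch of what such a request could look like. To be clear, `tool_bias` does not exist in the API today; its name and shape here (a per-tool map of logit offsets) are entirely hypothetical:

```python
# Hypothetical request body for the proposed tool_bias parameter.
# Positive values would nudge the model toward emitting the special
# tokens that begin a tool-call sequence; negative values away.
request = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    # Proposed: an offset added to the logits of the tool-call entry
    # tokens before softmax, keyed by tool name.
    "tool_bias": {"get_weather": 5},
}
```

Operationally this would amount to logit_bias applied to the hidden tool-entry tokens, with the token IDs abstracted away so nothing about the encoding needs to be exposed.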