logit_bias no longer fully working

You may be working in a domain of extreme certainty. Here's one I constructed for gpt-3.5-turbo-instruct:

[image]
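
For anyone who wants to poke at the same thing, this is roughly the shape of request behind that screenshot (placeholder prompt, not my exact one; assumes the openai Python library v1.x with OPENAI_API_KEY set in the environment):

```python
# Sketch only -- not the exact prompt from the screenshot.
import math
from openai import OpenAI

client = OpenAI()

resp = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="The capital of France is",  # placeholder near-certain prompt
    max_tokens=1,
    temperature=0,
    logprobs=5,  # return the top-5 logprobs at each position
)

# top_logprobs[0] is a dict of token -> logprob for the first sampled position
top = resp.choices[0].logprobs.top_logprobs[0]
for token, lp in sorted(top.items(), key=lambda kv: -kv[1]):
    print(f"{token!r}: {100 * math.exp(lp):.5f}%")
```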

What happens, then, when I throw ±0.01% into the logits?
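
Something along these lines, with tiktoken only used to look up the token id; the token and the tiny bias value are placeholders for whatever you're testing:

```python
# Same request as above, but with a tiny logit_bias nudging the near-certain token.
import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.encoding_for_model("gpt-3.5-turbo-instruct")
token_id = enc.encode(" Paris")[0]  # placeholder target token

resp = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="The capital of France is",  # placeholder prompt
    max_tokens=1,
    temperature=0,
    logprobs=5,
    logit_bias={str(token_id): 0.01},  # tiny fractional nudge; try -0.01 as well
)
print(resp.choices[0].logprobs.top_logprobs[0])
```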

It might be either optimizations that reduce the possible logits, or randomness in the outputs that is unobservable (until OpenAI frees up the logprobs).

It has already been observed that these new turbo models will produce random top-1 token flips even when every attempt at determinism is made. So why not also a normalized probability that randomly lands over 1.00000?
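
A loop like this is enough to see the flips for yourself (placeholder prompt and model; temperature and top_p clamped as the determinism attempts):

```python
# Repeat an identical near-deterministic request and count the distinct
# first tokens that come back; more than one key means top-1 flips.
from collections import Counter
from openai import OpenAI

client = OpenAI()
counts = Counter()
for _ in range(20):
    r = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder "new turbo" model
        messages=[{"role": "user", "content": "Name the capital of France, one word."}],
        max_tokens=1,
        temperature=0,
        top_p=0.0001,  # another attempt at pinning the sampler down
    )
    counts[r.choices[0].message.content] += 1

print(counts)
```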