When using logit_bias, it seems that all models (including fine-tuned models) just return gibberish. I have code that I’ve been using for several months without issue, but suddenly have to remove logit_bias in order for it to work. Anyone else having this issue?
The “not working” may simply be that you’re using the wrong token numbers.
gpt-4o uses a different token encoder, so you’ll have to repeat the process of finding which token numbers produce the output you want to discourage or encourage.
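For reference, here’s a rough sketch of that process with tiktoken (shown against gpt-4’s encoding, since 4o’s encoding hasn’t been published yet; `client` is assumed to be an OpenAI client instance):

```python
import tiktoken

# logit_bias keys are token IDs under the *model's own* encoding,
# so a bias map built for one model can be meaningless for another.
enc = tiktoken.encoding_for_model("gpt-4")  # cl100k_base

# Find the token IDs for the text you want to suppress or encourage.
# Leading spaces and capitalization produce different tokens.
ids = enc.encode(" example")
print(ids)

# Map each token ID to a bias from -100 (effectively ban) to 100 (strongly favor).
logit_bias = {tok: -100 for tok in ids}

# Then pass the map to the API call, e.g.:
# client.chat.completions.create(model="gpt-4", messages=..., logit_bias=logit_bias)
```

Until the 4o encoding ships, any IDs you computed for earlier models won’t line up with what 4o actually samples, which would explain gibberish-looking output.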
@_j - You’re right about the wrong token IDs for 4o. But still, this wasn’t working for any of the models yesterday. I guess once they release the tokenizer for 4o it will become clear…