`logit_bias` not working as it did before

When using logit_bias, it seems that all models (including fine-tuned models) just return gibberish. I have code that I’ve been using for several months without issue, but suddenly have to remove logit_bias in order for it to work. Anyone else having this issue?

(PS- I’ve raised this with support already.)


Seems to be working again today. Weird. Still waiting on a reply from support.


Works with gpt-3.5-turbo, gpt-4, and gpt-4-turbo, but doesn’t work with gpt-4o. Already raised with support.

The “not working” may just be that you’re using the wrong token numbers.

gpt-4o uses a different token encoder, so you will have to repeat the process of finding which token numbers form the output you want to suppress or encourage.
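As a sketch of what that looks like in practice: the API’s `logit_bias` parameter is a map from token-ID strings to a bias value clamped to the range −100 to 100. The helper and token IDs below are purely illustrative (not from this thread), just to show the shape of the dictionary you’d pass:

```python
def make_logit_bias(token_ids, bias):
    """Build a logit_bias dict for the chat/completions API.

    The API expects string keys (token IDs) mapping to a bias in
    [-100, 100]; -100 effectively bans a token, 100 effectively
    forces it when reachable.
    """
    bias = max(-100, min(100, bias))  # clamp to the API's allowed range
    return {str(t): bias for t in token_ids}


# Hypothetical token IDs for illustration only -- the same text maps to
# different IDs under cl100k_base (gpt-4) and o200k_base (gpt-4o), which
# is why a bias dict built for gpt-4 misfires on gpt-4o.
print(make_logit_bias([1734, 271], -100))  # → {'1734': -100, '271': -100}
```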


@_j - You’re right about the wrong token IDs for 4o. But still, this wasn’t working for any of the models yesterday. I guess once they release the tokenizer for 4o it will become clear…

You can encode and decode tokens with tiktoken.

You can experiment directly in this interface: Tiktokenizer

You’re awesome. Thank you!