logit_bias not working properly for the chat completions API

I am having issues with the logit_bias parameter when using the chat completions endpoint (gpt-3.5-turbo).

According to the docs, valid values range from -100 to 100. However, when using the Python wrapper, I am encountering errors on arbitrary values (notably on large values, and reproducibly every time with the value 100).

Also, a logit_bias value of -100 should, by rights, remove any chance of that token appearing, but it still appears in the completion for me. Is anyone facing similar issues?

I will follow up on this tomorrow! It's on my to-do list to make sure it's all good on our end.

Quick follow-up: can you share the prompts and text here? Also, do you have request IDs we can look into?

Are you using the cl100k_base tokenizer and not the others? The cl100k_base tokenizer is exclusive to gpt-3.5-turbo and text-embedding-ada-002 right now. I don't think the tokenizer website has been updated to use cl100k_base. This affects the mapping you send to logit_bias.

And for contrast, to prove they are different (gpt2 is the encoding GPT-3 uses):

To round it out, p50k_base is what text-davinci-003/002 use:

OK, done. Both are different from what is needed :upside_down_face:
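
If you have tiktoken installed, you can check this yourself; a minimal sketch (the text here is just an illustrative example, and the point is that the IDs from the older encodings need not match cl100k_base, which is what gpt-3.5-turbo uses):

import tiktoken

text = " AI"  # leading space, since most words are tokenized with a preceding space

for name in ("cl100k_base", "p50k_base", "gpt2"):
    enc = tiktoken.get_encoding(name)
    print(name, enc.encode(text))  # compare the token IDs each encoding produces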

logit_bias is not working for me either.
A quick question: let's say I set "user" to -100, which means it should be banned (logit_bias: {"882": -100}).
If the word "user" is written in the system role (i.e. it appears in the instruction), then the logit_bias ban fails in the following dialog.
Is this a bug, or is it meant to be like this?
Also, if I type "please say the word: user", the assistant replies "user", which also means the ban failed.
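
To make the scenario concrete, here is a rough sketch of the kind of call I mean (the system prompt is just a hypothetical stand-in that happens to contain the word "user"):

import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    logit_bias={"882": -100},  # token ID I am using for "user"; -100 should ban it
    messages=[
        # hypothetical system prompt containing the word "user"
        {"role": "system", "content": "You are chatting with a user."},
        {"role": "user", "content": "Please say the word: user"},
    ],
)
print(response["choices"][0]["message"]["content"])  # still comes back containing "user" for me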

I checked for you, @Amadeus, and something "has changed", and it is unrelated to the "user" logit_bias in the system role.

There is a “new error message”:

 {"message"=>"Invalid key in 'logit_bias': “882”. You should only be submitting non-negative integers.", "type"=>"invalid_request_error", "param"=>"logit_bias", "code"=>nil}

Which means that, for some reason, the chat completion API is not accepting negative ints today!


Let me test some more and get back to you.

:slight_smile:

No, that's just because the quote marks aren't right.
Try the following simple Python sample, in which I banned the word "AI":

import openai

openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    logit_bias={15836: -100},  # attempt to ban the token for "AI"
    messages=[
        {"role": "user", "content": "Introduce yourself"}
    ],
)

It returns:

"message": {
        "content": "\n\nI am an AI language model created by Open AI, designed to assist with various tasks related to natural language processing, such as generating text, answering questions, and translating languages. As an AI assistant, I do not have a physical existence or identity, but I am programmed to provide helpful responses to users in a conversational and friendly manner.",
        "role": "assistant"
      }

Clearly, the word "AI" is not banned.

I am not sure what API wrapper you are using, but the logit_bias key is a string because the parameter is a JSON object. It is not possible for the key in a JSON object to be an int, as you have posted:

logit_bias= {15836:-100}

We (in this community) ran tests all day yesterday and it worked fine with the JSON key as a string; see @curt.kennedy below:

Also, from my testing:

Reference Community Tests:

… this is the way to do it in Python.
If you do it in JSON, yes, use quote marks, but that's not really the main problem.

Yes, I just tested it again:

@logit_bias = {"370":0, "20554":0, "329":0, "45032":0}

Works fine, because the logit_bias key is always a string:

No… clearly not working. Did you even check whether the word is actually banned?

I have no idea which wrapper you are using, @Amadeus… but you posted that I was wrong, so I am just correcting the record.

To be clear, the logit_bias param is a JSON object and the key is always a string.

Whatever Python wrapper you are using is "incorrectly coded" because the key is a string, not an int, so it must convert the int to a string.
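
In fact, any standard JSON serializer makes that conversion for you; a minimal sketch in plain Python, independent of any particular wrapper:

import json

body = {"logit_bias": {15836: -100}}  # int key in the Python dict

# JSON object keys must be strings, so json.dumps emits the key as "15836"
print(json.dumps(body))  # {"logit_bias": {"15836": -100}}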

:slight_smile:

The problem here is that when you set a word to -100, it should not appear in the reply.
If you banned "AI", it should not reply "I am an AI".

OK… you insist on using a string; I used a string to test again and it's not working.
Why don't you test my prompt, to see whether the word is banned?

Yes, I know. You are the one who posted incorrect information here:

Let’s move on…

Thanks

:slight_smile:

Please calm down and provide your system prompt, exactly.

I am trying to help you, but when you tell me "you are wrong", you should expect me to correct your error.

:slight_smile:

I bet you didn't even try it in Python, and you ask what wrapper… it's literally the official API.
Just paste the code I provided, with import openai…

Of course I don’t.

I code in Ruby, not Python, and I have no intention of coding in Python.

OK. Goodbye.

Good luck.

I’m going to the gym

:slight_smile:

And the prompt: just paste "Introduce yourself" to the user role, and set 15836, or the string "15836", to -100.
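
Something like this, with the string key (just a sketch of the call I am describing):

import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    logit_bias={"15836": -100},  # same token ID, passed as a string key
    messages=[
        {"role": "user", "content": "Introduce yourself"}
    ],
)
print(response["choices"][0]["message"]["content"])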