I am not sure which API wrapper you are using, but the logit_bias key is a string because logit_bias is a JSON object. A key in a JSON object cannot be an int, as in what you posted:

logit_bias= {15836:-100}
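
On the wire, that parameter has to arrive as part of a JSON request body, and JSON object keys are always quoted strings, e.g.:

{"logit_bias": {"15836": -100}}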

We (in this community) ran tests all day yesterday and it worked fine with the JSON key as a string; see @curt.kennedy below:

Also, from my testing:

Reference Community Tests:

… this is the way to do it in Python.
If you do it in JSON then yes, use quote marks, but that’s not really the main problem.

Yes, I just tested it again:

@logit_bias = {"370":0, "20554":0, "329":0, "45032":0}

Works fine, because the logit_bias key is always a string:

No… clearly not working. Did you even check whether the word is actually banned?

I have no idea which wrapper you are using, @Amadeus … but you posted that I was wrong, so I am just correcting the record.

To be clear, the logit_bias param is a JSON object and the key is always a string.

Whatever Python wrapper you are using is “incorrectly coded”, because the key is a string, not an int, so the wrapper must convert the int to a string.

:slight_smile:

The problem here is: when you set a word’s token to -100, it should not appear in the reply.
If you banned “AI”, it should not reply “I am AI”.

OK… you insist on using a string, so I used a string to test again and it’s not working.
Why don’t you test my prompt, to see whether the word is banned?

Yes, I know. You are the one who posted incorrect information here:

Let’s move on…

Thanks

:slight_smile:

Please calm down and provide your system prompt, exactly.

I am trying to help you but when you tell me “you are wrong” you should expect me to correct your error.

:slight_smile:

I bet you don’t even try it in Python, and you ask what wrapper… it’s literally the official API.
Just paste the code I provided, with import openai …

Of course I don’t.

I code in Ruby, not Python, and I have no intention of coding in Python.

OK. Goodbye.

Good luck.

I’m going to the gym

:slight_smile:

And for the prompt, just paste
Introduce yourself
as the user role message, and
set 15836, or the string “15836”, to -100.

It’s so simple, just try it. I am not going to debate which one is correct with you; why not try both?

expect me to correct your error

He’s not wrong, though.
In Using logit bias to define token probability | OpenAI Help Center
you can see the example given by OpenAI:

openai.Completion.create(
 engine="davinci",
 max_tokens=50,
 temperature=0,
 prompt = "Once upon a",
 logit_bias={2435:-100, 640:-100},
)

So your claim

To be clear, the logit_bias param is a JSON object and the key is always a string.

Whatever Python wrapper you are using is “incorrectly coded”, because the key is a string, not an int, so the wrapper must convert the int to a string.

is actually wrong.

No. It is not wrong.

Your Python API wrapper is poorly written.

Don’t you understand basic good coding practices? The API calls for a JSON object. Your Python wrapper is incorrectly written to accept an integer as a JSON key, and if you look inside that Python API wrapper you will see that the developer then changes that integer to a string before the wrapper sends it to the API. This is a very poor coding practice.

Yes, your wrapper works using an int because the coder compensates for the error by mapping the int to a string in the code which is NOT visible to you.
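
That coercion is easy to demonstrate: Python’s own json module converts integer dict keys to strings when it serializes, and a wrapper can do the same thing explicitly. A minimal sketch of the kind of normalization being described (a hypothetical helper, not the actual openai-python source):

import json

def normalize_logit_bias(logit_bias):
    # Hypothetical helper: coerce integer token IDs to the string keys
    # that the JSON request body ends up with anyway.
    return {str(token_id): bias for token_id, bias in logit_bias.items()}

# Both spellings serialize to the same JSON on the wire.
print(json.dumps({"logit_bias": {2435: -100, 640: -100}}))
print(json.dumps({"logit_bias": normalize_logit_bias({2435: -100, 640: -100})}))
# Both print: {"logit_bias": {"2435": -100, "640": -100}}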

I use a better written API wrapper which does not mismatch param types.

Also, I tested this using my OpenAPI lab just now, using:

   @logit_bias = {"882":0}

and it works fine.

If I change it to:

    @logit_bias = {"882":-100}

It still works fine, no errors:

So, at least for me, I have no problem with this:

   @logit_bias = {"882":-100}

as my best guess is that OpenAI was working on the system and the errors before were transitory.

Take care of yourselves, guys.

I wish you the best of luck with your poorly written Python API wrapper.

The Ruby ruby_openai OpenAI API wrapper does not mistype JSON keys because it is better written than the Python API wrapper you are using.

Why argue about it? I don’t have any errors on my end.

:slight_smile:

Can you test

@logit_bias = {"15836":-100}

and then ask the assistant

Introduce yourself

If the word “AI” appears in the assistant’s reply, then it means the logit_bias is not working. (Because 15836 is the token for “AI”, setting it to -100 should ban the word.)
It may not return any error code, but it’s actually not functioning.
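
If you want to verify the token ID yourself before running the test, tiktoken can show what 15836 decodes to for the gpt-3.5-turbo encoding (assuming tiktoken is installed; the 15836 ↔ “AI” mapping is this thread’s claim, so check it on your side):

import tiktoken  # pip install tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
print(enc.encode("AI"))     # token ID(s) the tokenizer assigns to "AI"
print(enc.decode([15836]))  # the text that token 15836 decodes to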

For clarification, @sharkcat, who opened this thread, is asking two different questions:
1. When set to 100 → error
2. When set to -100 → not functioning
You are talking about question 1, while @Amadeus and I are talking about question 2; that’s why we are talking past each other. Both cases are sketched below.
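
A minimal sketch of both cases side by side, using the same pre-1.0 openai Python SDK as the rest of this thread (an API key is assumed to be configured; the behavior of each case is as reported here, not guaranteed):

import openai

prompt = [{"role": "user", "content": "Introduce yourself"}]

# Question 1 (as reported): a bias of 100 returns an API error.
try:
    openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        logit_bias={"15836": 100},
        messages=prompt,
    )
except Exception as e:
    print("bias=100 raised:", e)

# Question 2 (as reported): a bias of -100 returns a normal completion,
# but the supposedly banned token can still appear in the reply.
resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    logit_bias={"15836": -100},
    messages=prompt,
)
print(resp["choices"][0]["message"]["content"])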

Also, as you can see in the screenshot you pasted, when you tested

@logit_bias = {"882":-100}

your assistant replied:

Here is a Ruby method that sorts an array of user...

882 is the token for “user”; setting it to -100 means it should not appear in the assistant’s reply.
So it seems logit_bias is also not functioning on your end.

We already had this discussion, @AI.Dev, and we know the OpenAI logit_bias for chat has issues.

The issues are on the API side.

You can search the site for that discussion.

HTH

:slight_smile:

I know that thread, but it only discusses issue 1 (when set to 100 → error).
We are now discussing another issue, issue 2 (when set to -100 → not functioning).
Hope you understand.
Also, I hope the devs can resolve these two problems ASAP, @logankilpatrick; we have provided the code to reproduce issue 2.

request ID: chatcmpl-6rfx5jx1FqUKOOF0sRnsjKA33bHOK

import openai

# Reproduction: ban token 15836 via logit_bias, then ask the model to introduce itself.
openai.ChatCompletion.create(
  model="gpt-3.5-turbo",
  logit_bias={"15836": -100},
  temperature=0,
  messages=[{"role": "system", "content": "Introduce yourself"}]
)

Response

{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "\n\nHello, I am an AI language model created by Open AI. I am designed to assist with various tasks such as answering questions, generating text, and providing information. As an AI language model, I do not have a physical form, but I am always ready to help with any queries you may have.",
        "role": "assistant"
      }
    }
  ],
  "created": 1678250667,
  "id": "chatcmpl-6rfx5jx1FqUKOOF0sRnsjKA33bHOK",
  "model": "gpt-3.5-turbo-0301",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 63,
    "prompt_tokens": 10,
    "total_tokens": 73
  }
}

Maybe I’ll open a new thread later with a detailed reproduction.
