Logitbias not working properly for chatcompletions api

It's so simple, just try it. I'm not going to debate which one is correct with you; why not try both?

expect me to correct your error

He’s not wrong though
In Using logit bias to define token probability | OpenAI Help Center
You can see the example given by OpenAI

openai.Completion.create(
  engine="davinci",
  max_tokens=50,
  temperature=0,
  prompt="Once upon a",
  logit_bias={2435: -100, 640: -100},
)

So your claim

To be clear, the logit_bias param is a JSON object and the key is always a string.

Whatever Python wrapper you are using is “incorrectly coded” because the key is a string, not an int, so they must convert the int to a string.

Is actually wrong

No. It is not wrong.

Your Python API wrapper is poorly written.

Don’t you understand basic good coding practices? The API calls for a JSON object. Your Python wrapper is incorrectly written to accept an integer as a JSON key, and if you look inside that Python API wrapper, you will see that the developer then changes that integer to a string before the wrapper sends it to the API. This is very poor coding practice.

Yes, your wrapper works using an int because the coder compensates for the error by mapping the int to a string in code that is NOT visible to you.
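For what it's worth, a wrapper doesn't even need explicit mapping code for this: Python's standard `json` module silently coerces integer dict keys to strings during serialization, so an int-keyed `logit_bias` still produces a valid JSON object on the wire. A quick sketch:

```python
import json

# json.dumps coerces non-string keys (here the int 882) to strings,
# because JSON object keys must be strings. This is why passing
# logit_bias={882: -100} to a Python wrapper "just works".
payload = json.dumps({"logit_bias": {882: -100}})
print(payload)  # {"logit_bias": {"882": -100}}
```

So whether this counts as "poor coding" in the wrapper or simply standard-library behavior is debatable; either way, the API itself only ever sees string keys.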

I use a better written API wrapper which does not mismatch param types.

Also, I tested this using my OpenAPI lab just now, using:

   @logit_bias = {"882":0}

and it works fine.

If I change it to:

    @logit_bias = {"882":-100}

It still works fine, no errors:

So, at least for me, I have no problem with this:

   @logit_bias = {"882":-100}

as my best guess is that OpenAI was working on the system and the errors before were transitory.

Take care of yourselves, guys.

I wish you the best of luck with your poorly written Python API wrapper.

The Ruby ruby_openai OpenAI API wrapper does not mistype JSON keys because it is better written than the Python API wrapper you are using.

Why argue about it? I don’t have any errors on my end.

:slight_smile:

Can you test

@logit_bias = {"15836":-100}

and then ask the assistant

Introduce yourself

If the word “AI” appears in the assistant's reply, then the logit_bias is not working. (Because 15836 is the token for “AI”, setting it to -100 should ban the word.)
It may not return any error code, but it's actually not functioning.
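The check described above can be sketched as a tiny helper: ban the token, ask for an introduction, and then scan the reply for the banned word. The function name is hypothetical and purely illustrative; the token ID 15836 and the word “AI” are taken from this discussion.

```python
# Hypothetical helper illustrating the test described above: after sending
# logit_bias={"15836": -100}, the banned word "AI" should never appear in
# the assistant's reply if the bias is functioning.
def bias_is_functioning(reply: str, banned_word: str = "AI") -> bool:
    """Return True if the banned word is absent from the reply."""
    return banned_word not in reply

print(bias_is_functioning("Hello, I am a language model."))      # True
print(bias_is_functioning("Hello, I am an AI language model."))  # False
```

The key point is that a 200 response alone proves nothing; only the absence of the banned word does.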

For clarification, @sharkcat, who opened this thread, is asking two different questions:
1. When set to 100 → error
2. When set to -100 → not functioning
You are talking about question 1, while @Amadeus and I are talking about question 2.
That's why we are talking past each other.

Also, as you can see in the screenshot you pasted, when you test

@logit_bias = {"882":-100}

Your assistant replied:

Here is a Ruby method that sorts an array of user...

882 is the token for “user”, and setting it to -100 means it should not appear in the assistant's reply.
So it seems logit_bias is also not functioning on your end.

We had this discussion already, @AI.Dev, and we know the OpenAI logit_bias for chat has issues.

The issues are on the API side.

You can search the site for that discussion.

HTH

:slight_smile:

I know that thread, but it only discusses issue 1 (when set to 100 → error).
We are now discussing another issue, 2 (when set to -100 → not functioning).
Hope you understand.
Also, I hope the devs can resolve these two problems ASAP, @logankilpatrick; we have provided the code to reproduce issue 2.


request ID: chatcmpl-6rfx5jx1FqUKOOF0sRnsjKA33bHOK

openai.ChatCompletion.create(
  model="gpt-3.5-turbo",
  logit_bias={"15836": -100},
  temperature=0,
  messages=[{"role": "system", "content": "Introduce yourself"}],
)

Response

{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "\n\nHello, I am an AI language model created by Open AI. I am designed to assist with various tasks such as answering questions, generating text, and providing information. As an AI language model, I do not have a physical form, but I am always ready to help with any queries you may have.",
        "role": "assistant"
      }
    }
  ],
  "created": 1678250667,
  "id": "chatcmpl-6rfx5jx1FqUKOOF0sRnsjKA33bHOK",
  "model": "gpt-3.5-turbo-0301",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 63,
    "prompt_tokens": 10,
    "total_tokens": 73
  }
}
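Scanning that response programmatically makes the failure explicit. A minimal sketch using only the standard library (the `content` string is truncated from the response above):

```python
import json

# The API response from above, truncated to the fields that matter here.
response = json.loads("""
{
  "choices": [
    {"message": {"role": "assistant",
                 "content": "Hello, I am an AI language model created by Open AI."}}
  ]
}
""")

# 15836 was reported in this thread as the token for "AI"; if the bias
# were functioning, "AI" should not appear in any returned choice.
banned_word = "AI"
functioning = all(banned_word not in choice["message"]["content"]
                  for choice in response["choices"])
print(functioning)  # False: the banned word appears, so the bias is not working
```

This mirrors the complaint exactly: the request succeeds with a normal `finish_reason`, yet the supposedly banned token shows up in the completion.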

Maybe I’ll open a new thread later with a detailed reproduction.


Agree 100%

I guess OpenAI is working on this, because the logit_bias param has been causing strange, intermittent, and seemingly nonsensical API errors in the past 12 hours.

:slight_smile:

Does this function still not work? I tried using curl, Python, and copying & pasting from here: https://help.openai.com/en/articles/5247780-using-logit-bias-to-define-token-probability, but it doesn’t work for me.