High values in logit_bias in OpenAI chat completion endpoint params causes error

Just tested this, and it is slightly more complex than that. The API describes logit_bias as a JSON object, and while the expected format is an “odd” way of doing things, in my view, it works.

Without regard to the tokenizer, I simply selected random token IDs and tested for errors; I did not test whether the tokens were the ones expected to affect the completion, which is another topic (the “what tokenizer to use” question). I only tested the format of the param object:

Test 1: Single JSON Object

{"2435":0}

Results Above: Passed chat completion API, no errors.

Test 2: Array of JSON objects:

[ {"2435":0,"2431":0} ]

Results Above: Failed chat completion API, error from API:

{message=>[{'2435': 0, '2431': 0}] is not of type 'object' - 'logit_bias', type=>invalid_request_error, param=>nil, code=>nil}

Test 3: Strange-looking logit_bias single JSON object with multiple entries:

{"2435":0,"2431":0} 

Results Above: Passed chat completion API, no errors.
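To make the passing format concrete, here is a minimal sketch of a request body using the Test 3 shape: logit_bias is a single JSON object whose keys are token-ID strings and whose values are bias numbers. The token IDs are the arbitrary ones from the tests above, and the model name is an assumption, not something the tests specified.

```python
import json

# Chat completion payload with logit_bias in the format that passed
# (Tests 1 and 3): one JSON object, token-ID strings mapped to biases.
# "gpt-3.5-turbo" and the message content are illustrative assumptions.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say hello"}],
    "logit_bias": {"2435": 0, "2431": 0},  # NOT wrapped in an array
}

body = json.dumps(payload)
print(body)
```

Sending `body` as the POST body to the chat completion endpoint is what produced the "no errors" result above.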

Summary

This test only focused on the format of the logit_bias param object, not the effect of the param on the completion.

There was no error (500 or otherwise) when the logit_bias param was formatted as a single JSON object.

I expected an array of logit_bias entries to work, since that is the standard JSON way to format a list of objects, but that test failed.

Formatting multiple logit_bias entries as a single JSON object worked fine, which was unexpected; I had guessed the array of logit_bias JSON objects would be the proper format.
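The error message from Test 2 suggests the API runs a simple type check: logit_bias must be a JSON object (a dict once parsed), never an array. A minimal sketch of that check, assuming this is all the API is validating here:

```python
# Hypothetical validator mirroring the API's apparent behavior:
# a dict passes, an array of objects (Test 2's format) is rejected.
def validate_logit_bias(value):
    if not isinstance(value, dict):
        raise ValueError(f"{value!r} is not of type 'object' - 'logit_bias'")
    return value

validate_logit_bias({"2435": 0, "2431": 0})       # Test 3 shape: accepted

try:
    validate_logit_bias([{"2435": 0, "2431": 0}])  # Test 2 shape: rejected
except ValueError as e:
    print(e)
```

So multiple biases go as multiple key/value pairs inside one object, not as multiple objects inside an array.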

I repeated these tests multiple times with the same results.

Hope this helps.

:slight_smile: