Hi @colemanhindes, were you able to solve this? Working in python too, and getting the same error you mention.
I just tried with the Python module and it also returned a 500 error.
I was able to resolve it by escaping the \n for some reason
res = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=a,
max_tokens=50,
temperature=1.1,
frequency_penalty=0.6,
presence_penalty=0.6,
stop=[r"\n"]
)
Although, to be fair, I don’t think it’s quite necessary. I haven’t been using any stops and haven’t noticed any issues. Not to mention that it’s not even an option in the playground
Thank you, that works!
The problem I’m facing is that its answers tend to be very long, and because of that I hit the token limit within five or six interactions.
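One way to cope with that is to trim the oldest turns before each request. A rough sketch (the `trim_history` helper and the ~4-characters-per-token estimate are my own, not from this thread; a real implementation would count tokens with a tokenizer such as tiktoken):

```python
# Hypothetical helper: keep only the most recent turns so the running
# conversation stays under a token budget. Uses a crude ~4-characters-
# per-token estimate instead of a real tokenizer.
def trim_history(messages, max_tokens=3000):
    budget = max_tokens * 4  # approximate character budget
    kept, used = [], 0
    for msg in reversed(messages):  # walk backwards so the newest turns survive
        used += len(msg["content"])
        if used > budget and kept:  # always keep at least the newest message
            break
        kept.append(msg)
    return list(reversed(kept))
```

The result can then be passed as the `messages` argument, so long answers push old context out instead of overflowing the limit.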
Hi folks, I’m an engineer at OpenAI. This is a known bug and something we’re working on fixing. It is slightly complicated due to the way we are parsing the model’s response into the message format (the parsing logic ascribes some special meaning to new lines, so if you specify a new line as a stop token, it interferes with parsing). We have an idea of how to fix this, will share an update here once it’s fixed.
In the meantime, could you share some product use cases as to why you all use \n as a stop token? What are you trying to achieve with it?
@RonaldGRuckus Thanks for sharing, I think maybe this avoids the error but is not actually matching the newlines.
Thanks for the insight Atty, I want to use \n as a stop sequence for the reasons mentioned in this thread: Prompt engineering question with chatGPT turbo API
Admittedly I didn’t actually test it.
It is slightly complicated due to the way we are parsing the model’s response into the message format
Would lead me to believe there may not be a workaround for now
In an interactive dialogue system, we only need the next turn of dialogue. Sometimes the model generates multiple turns at once; that's why we use '\n' as a stop token.
Hi @saintkyumi
Another issue which could be at play here is that the newline char \n is generally only interpreted as a newline when it is used with double quotes, like this: "\n". When you use single quotes, as you have done like this: '\n', most systems that I am familiar with over 4 decades of coding will interpret it as literally "backslash n" and not a newline.
So, in the interim, you might try changing your single quotes to double quotes. That is how all my stops are sent to the API and how all the OpenAI examples I have seen are written (that I can remember), FYI.
Hope this helps.
Hi @ruby_coder
Thanks for your suggestion.
The 500 error keeps appearing regardless of single or double quotes.
As @atty-openai mentioned, the newline stop itself causes an error on the backend side.
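For what it's worth, in Python specifically the quote style can't be the culprit: single- and double-quoted strings are identical, and only a raw string keeps the literal backslash. A quick sanity check:

```python
# Single and double quotes produce the same one-character newline string:
assert '\n' == "\n"
assert len("\n") == 1
# A raw string is different: two characters, a backslash followed by "n".
assert r"\n" == "\\n"
assert len(r"\n") == 2
print("quote style makes no difference in Python")
```

So '\n' and "\n" send exactly the same stop sequence to the API, which is consistent with the 500 appearing either way.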
That may well be, but I have tried repeatedly to confirm this error, including just now, and I cannot get a newline to cause any error.
Example Setup using newline as stop
Example completion (works fine)
FWIW
I have tried many (at least 10) times with newlines as stops and turbo as the model and have never experienced an error. However, in practice I do not use newlines as stops.
HTH
This bug is not limited to newline characters, it happens on the ChatCompletion endpoint whenever the first token generated is a stop token; I ran into it last week but wasn’t sure how to report it.
It can be easily reproduced by setting the logit_bias of any stop token to 100:
openai.ChatCompletion.create(
    model = "gpt-3.5-turbo",
    messages = [{"role": "user", "content": "hello"}],
    temperature = 0,
    logit_bias = {12340: 100},  # force the stop token to be generated first
    stop = ["!!!"],
)
And why would anyone earnestly writing API calls submit a logit_bias for a stop?
The bug occurs when a stop token is generated at the start of a response; logit_bias is used here only to reproduce the error consistently, to help OpenAI track down what's causing it on their end.
Thanks for the thread; I set my stop sequence to something else and it fixed the problem. But now I'm getting similar errors with gpt-4, and they seem to be frequent no matter what I set stop to. I thought it might be due to rate limits, but I don't always get the RateLimitError. And I don't have this problem in every application. Pretty baffling.
We had a fairly comprehensive discussion on this logit_bias == 100 issue in another topic, and our testing showed that the bug arises because the API has trouble with high values of logit_bias, stop or not.
Example 1: Hello World, Stop “####”, Logit Bias World, 0
Example 1: Results OK
Example 2: Hello World, Stop “####”, Logit Bias World, 100
Example 2: Results: Pesky Timeout Error Again:
This happens today with large values of logit_bias, so I'll table these tests for now until the models are performing better.
Note: As this is a lab, I want to see the timeout errors, so I have no internal timeout or retry code active (not production).
Example 3 Results: Hello World, Stop “####”, Logit Bias World, -50 (OK)
Example 4 Results: Hello World, Stop “####”, Logit Bias World, 50
This is consistent with earlier results, where large positive logit_bias values are problematic.
You are correct, evidently I just happened to uncover a second bug with logit_bias while trying to use it to demonstrate the first one. What can I say, I’m good at breaking things.
However, since I do not use a line break as a stop token or logit_bias in the code that keeps triggering this error, I've done some more testing, and it looks like any stop token that simply begins with a line break will cause this.
Here’s a revised example to reproduce the bug, which is still present as of today:
openai.ChatCompletion.create(
    model = "gpt-3.5-turbo",
    messages = [
        {"role": "user", "content": "Marco"},
    ],
    max_tokens = 5,
    temperature = 0,
    stop = ["\nPolo"],  # stop sequence beginning with a newline
)
I’m good at analysis, coding, building, debugging and testing… so we match
Here is your example above, tested with your same params:
The Chat Setup
The Results: No Error (Success)
HTH
See Also, Appendix:
Same Params, but Using Stop Array:
Results: Success
Try using the API directly instead of through a web interface; if I had to guess, the input for the stop token is escaping the backslash and treating it as a literal instead of a newline character.
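The escaping theory is easy to check from code (a sketch; the payload shape just mirrors the snippets above). When the request is built in Python, "\n" is a real newline character, and json.dumps escapes it to the two characters \n on the wire, which the API decodes back to a single newline, so no double-escaping can sneak in:

```python
import json

# Build the request body the way the Python client would.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Marco"}],
    "stop": ["\nPolo"],
}

# In memory, the stop sequence starts with an actual newline character;
# on the wire, json.dumps represents it as the escape sequence \n.
wire = json.dumps(payload)
print(wire)
```

A web form, by contrast, may send the literal two characters "\" and "n", which would never match a newline in the model's output.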
OK !!!
Will test from the command line in the Rails console tomorrow.