Stop=['\n'] in "gpt-3.5-turbo" leads to 500 error

Hi all, I've worked a bunch with the older GPT-3 APIs and am trying to upgrade to the new turbo API, but I run into this issue when I set the "stop" param to ['\n']. Any idea what's going on here? Is it a bug, or am I doing something wrong?

The issue goes away when I remove the stop parameter.

import openai

generator = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ],
    max_tokens=100,
    temperature=0.7,
    stop=['\n']
)
print(generator)
(env) (base) ubuntu@150-136-40-25:~/coleman/salesnova$ python supernova/test_chat.py 
Traceback (most recent call last):
  File "/home/ubuntu/coleman/salesnova/supernova/test_chat.py", line 6, in <module>
    generator = openai.ChatCompletion.create(
  File "/home/ubuntu/coleman/salesnova/env/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/home/ubuntu/coleman/salesnova/env/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/home/ubuntu/coleman/salesnova/env/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/home/ubuntu/coleman/salesnova/env/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/home/ubuntu/coleman/salesnova/env/lib/python3.10/site-packages/openai/api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
openai.error.APIError: The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID b66f7b76a7b33ccdfa2175632eecabf0 in your email.) {
  "error": {
    "message": "The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID b66f7b76a7b33ccdfa2175632eecabf0 in your email.)",
    "type": "server_error",
    "param": null,
    "code": null
  }
}
 500 {'error': {'message': 'The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID b66f7b76a7b33ccdfa2175632eecabf0 in your email.)

Hi @colemanhindes

You might be making some other error or mistake.

The stop you are using works fine, as I just tested it for you with your messages:

[Screenshot: Chat Setup]

[Screenshot: Completion]

Hope this helps.

:slight_smile:

Thanks for the quick response. Would you be able to check the request ID I shared in the post?

I have no access to your OpenAI data.

Sorry @colemanhindes

:slight_smile:

I also get the 500 error, but only when using stop='\n' or stop=['\n']. Does gpt-3.5-turbo disallow the newline string ('\n') as a stop token? It works for text-davinci-003, though.
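For reference, a minimal sketch of the equivalent completions call that does accept a newline stop (assuming the same pre-1.0 openai Python library as above; the prompt is just a placeholder):

import openai

# Same stop value, but against the completions endpoint with text-davinci-003:
# this returns normally instead of a 500 error.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Q: Where was the 2020 World Series played?\nA:",
    max_tokens=100,
    temperature=0.7,
    stop=["\n"]
)
print(response["choices"][0]["text"])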

No. It works fine (for me). Please see my reply above.

:slight_smile:

@ruby_coder, what tools are you using? I want to reproduce your result. I was using Python and the curl command; still no luck for me.

I wrote that tool using Ruby on Rails; the underlying wrapper is a Ruby gem called ruby-openai, which is very well written, super reliable, and actively maintained.

HTH

:slight_smile:


I see, so maybe it works in Ruby but not Python. Is there anyone at OpenAI who can look into this?

I was able to recreate the 500 error, but I don't understand why, or whether it's my code or not, because I can play around with the syntax to get it to work.


P.S. Okay, I'll stop editing my post now :smiley:

Hi @colemanhindes, were you able to solve this? I'm working in Python too and getting the same error you mention.

I just tried with the Python module and it also returned a 500 error.

I was able to resolve it by escaping the \n, for some reason:

res = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=a,  # a is the list of message dicts
    max_tokens=50,
    temperature=1.1,
    frequency_penalty=0.6,
    presence_penalty=0.6,
    stop=[r"\n"]
)

Although, to be fair, I don't think it's quite necessary. I haven't been using any stops and haven't noticed any issues. Not to mention that it's not even an option in the Playground.


Thank you, that works!

The problem I’m facing is that its answers tend to be very long, and because of that I hit the token limit within five or six interactions.

Hi folks, I’m an engineer at OpenAI. This is a known bug and something we’re working on fixing. It is slightly complicated due to the way we are parsing the model’s response into the message format (the parsing logic ascribes some special meaning to new lines, so if you specify a new line as a stop token, it interferes with parsing). We have an idea of how to fix this, will share an update here once it’s fixed.

In the meantime, could you share some product use cases as to why you all use \n as a stop token? What are you trying to achieve with it?


@RonaldGRuckus Thanks for sharing. I think this may avoid the error, but it isn't actually matching the newlines.
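A quick way to see why (plain Python, nothing API-specific): a raw string r"\n" is a backslash followed by the letter n, not the newline character.

# r"\n" has two characters (backslash, n); "\n" is a single newline.
print(len(r"\n"), len("\n"))   # 2 1
print(r"\n" == "\n")           # False

So the request presumably succeeds because the stop sequence is no longer a real newline, but for the same reason it will never trigger on actual line breaks in the output.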

Thanks for the insight, Atty. I want to use \n as a stop sequence for the reasons mentioned in this thread: Prompt engineering question with chatGPT turbo API

Admittedly I didn’t actually test it.

"It is slightly complicated due to the way we are parsing the model's response into the message format"

would lead me to believe there may not be a workaround for now.

In an interactive dialogue system, we only need the next turn of dialogue. Sometimes the model generates multiple turns; that's why we use '\n' as a stop token.
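Until the bug is fixed, a rough client-side sketch of the same idea is to truncate at the first newline after the call instead of sending it as a stop sequence (assuming response holds a chat completion like the ones earlier in this thread):

# Keep only the first line of the reply, approximating stop=["\n"]
# without passing it to the API.
reply = response["choices"][0]["message"]["content"]
next_turn = reply.split("\n", 1)[0].strip()
print(next_turn)

The downside is that the model still generates (and bills for) the lines you then discard.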

Hi @saintkyumi

Another issue which could be at play here is that the newline char \n is generally only interpreted as a newline when it is used with double quotes, like this: "\n".

When you use single quotes, as you have done like this: '\n', most systems that I am familiar with over four decades of coding will interpret this literally as "backslash n" and not a newline.

So, in the interim, you might try changing your single quotes to double quotes. That is how all my stops are sent to the API and how all the OpenAI examples I have seen are written (that I can remember), FYI.

Hope this helps.

:slight_smile:

Hi @ruby_coder
Thanks for your suggestion.
The 500 error keeps appearing regardless of single or double quotes.
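In Python the two quoting styles are equivalent anyway, which may be why the change makes no difference here (plain Python, not API-specific):

# Unlike Ruby, Python treats \n as a newline in both single- and double-quoted strings.
print('\n' == "\n")   # True
print(len('\n'))      # 1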

As @atty-openai mentioned, the newline stop itself causes an error on the backend side.