Hi all, I've worked a lot with the older GPT-3 APIs and I'm trying to upgrade to the new turbo API, but I hit this issue whenever I set the "stop" param to ['\n']. Any idea what's going on here? Is this a bug, or am I doing something wrong?
The issue goes away when I remove the stop parameter.
generator = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"},
    ],
    max_tokens=100,
    temperature=0.7,
    stop=['\n'],
)
print(generator)
(env) (base) ubuntu@150-136-40-25:~/coleman/salesnova$ python supernova/test_chat.py
Traceback (most recent call last):
File "/home/ubuntu/coleman/salesnova/supernova/test_chat.py", line 6, in <module>
generator = openai.ChatCompletion.create(
File "/home/ubuntu/coleman/salesnova/env/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/home/ubuntu/coleman/salesnova/env/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "/home/ubuntu/coleman/salesnova/env/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
File "/home/ubuntu/coleman/salesnova/env/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
self._interpret_response_line(
File "/home/ubuntu/coleman/salesnova/env/lib/python3.10/site-packages/openai/api_requestor.py", line 679, in _interpret_response_line
raise self.handle_error_response(
openai.error.APIError: The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID b66f7b76a7b33ccdfa2175632eecabf0 in your email.) {
"error": {
"message": "The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID b66f7b76a7b33ccdfa2175632eecabf0 in your email.)",
"type": "server_error",
"param": null,
"code": null
}
}
500 {'error': {'message': 'The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID b66f7b76a7b33ccdfa2175632eecabf0 in your email.)
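Since the error message itself says the request can be retried, I've been working around it for now with a small retry helper with exponential backoff (a minimal sketch; the `with_retry` name and parameters are mine, not from the openai library):

```python
import time


def with_retry(fn, max_retries=3, backoff=1.0):
    """Call fn(), retrying on any exception with exponential backoff.

    Re-raises the last exception if all attempts fail.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Sleep backoff, 2*backoff, 4*backoff, ... between attempts.
            time.sleep(backoff * (2 ** attempt))


# Usage with the snippet above (same kwargs as in my code):
# generator = with_retry(
#     lambda: openai.ChatCompletion.create(
#         model="gpt-3.5-turbo",
#         messages=[...],
#         max_tokens=100,
#         temperature=0.7,
#         stop=['\n'],
#     )
# )
```

This obviously doesn't fix the underlying 500, but it papers over transient server errors while keeping the stop parameter in place.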