API: error in streaming mode

I’ve been getting this error when using either the ChatCompletion or Completion API in streaming mode:

openai.error.APIError: The server had an error while processing your request. Sorry about that! (Error occurred while streaming.)

.venv/lib/python3.11/site-packages/openai/api_requestor.py:687: APIError

I didn’t change anything in the code and I use the standard openai-cookbook recipe for streaming (for event in response...). Started getting this error today.

Anyone else seeing this?
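For reference, the loop in question looks roughly like this. This is a stub-friendly sketch of the cookbook-style `for event in response` pattern, not the cookbook’s exact code; `collect_stream` is my name, and the chunk shapes follow the ChatCompletion delta format:

```python
def collect_stream(stream):
    """Accumulate streamed content; raise on an in-stream error payload."""
    parts = []
    for event in stream:
        # Server errors can arrive mid-stream as an SSE chunk with an "error" key.
        if "error" in event:
            raise RuntimeError(event["error"]["message"])
        delta = event["choices"][0]["delta"]
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)
```

In the real code, `stream` would be the generator returned by `openai.ChatCompletion.create(..., stream=True)`.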

3 Likes

Also been getting this every other request all day.

1 Like

I’ve been seeing a lot of these as well today with streaming and gpt-4-0613. I haven’t tested other models yet.

2 Likes

Yes, it’s specifically with streaming, although I think responses are generally slow right now and streaming may just be the most affected by it.
All my tests pass when I turn off streaming.
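Since non-streaming requests are passing, one stopgap is to fall back to a plain request when the stream fails. A minimal sketch, assuming `create` is any callable that accepts a `stream` keyword (e.g. `openai.ChatCompletion.create`); the function name is mine:

```python
def complete_with_fallback(create, **kwargs):
    """Try a streaming call first; on any error, retry once without streaming.

    `create` stands in for openai.ChatCompletion.create or a compatible callable.
    """
    try:
        # Consuming the generator here surfaces mid-stream errors immediately.
        return list(create(stream=True, **kwargs))
    except Exception:
        # Streaming is the flaky path right now; plain requests still pass.
        return create(stream=False, **kwargs)
```

You lose the incremental output on the fallback path, but the request still completes.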

I am also seeing a large number of SSE error payloads like

{'error': {'message': 'The server had an error while processing your request. Sorry about that!', 'type': 'server_error', 'param': None, 'code': None}}

with gpt-4-0613 today.

1 Like

Same, I get a lot of these errors. I am using function calling with the API.

gpt-4-0613 here as well. Help me. :frowning:

[2023-06-20 08:28:24] server.DEBUG: Exception: openAI streaming ERROR: The server had an error while processing your request. Sorry about that!

1 Like

I have the same problem. Using gpt-4-0613

I’ve been getting this error constantly today:
The server had an error while processing your request. Sorry about that! (Error occurred while streaming.)

I am also seeing a lot of this for the past ~24 hours with streaming gpt-4-0613. Disappears when I switch back to gpt-4-0314.

Same here. Any updates on the cause or when it will be fixed?

Same here. Since yesterday I haven’t been able to use the gpt-4-0613 model normally…

Has anyone found a solution?

:rofl: Same here. It’s too painful.

Same here. Using gpt-4-0613.

However, if you rely on https://status.openai.com/ there are no issues today or yesterday; everything is green.

Is there a way to get actual info or warnings about system downtime or overload?

Also, https://platform.openai.com/docs/guides/error-codes/api-errors suggests that for error “500 - The server had an error while processing your request” the solution would be “Retry your request after a brief wait and contact us if the issue persists.”

Does anyone know the official way to contact OpenAI? There’s no contact info anywhere, and the embedded bot is not useful at all.
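In the meantime, the “retry after a brief wait” advice from the error-codes page boils down to exponential backoff. A generic sketch; the names are mine, not from the openai SDK:

```python
import time

def retry_with_backoff(fn, max_retries=3, base_delay=1.0):
    """Call fn(); on failure, wait base_delay * 2**attempt and retry.

    Re-raises the last exception once max_retries is exhausted.
    """
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Wrapping the streaming call in something like this at least papers over the intermittent 500s, though it won’t help if every request fails.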

4 Likes

Appears to be working now (gpt-4-0613).

Yep, I tried it and it’s working again. Damn.

I’ve been experiencing this on every request I make with gpt-3.5-turbo-16k, but it works whenever I switch to a different model.