I’ve been getting this error when using either the ChatCompletion or Completion API in streaming mode:
openai.error.APIError: The server had an error while processing your request. Sorry about that! (Error occurred while streaming.)
.venv/lib/python3.11/site-packages/openai/api_requestor.py:687: APIError
I didn’t change anything in the code, and I use the standard openai-cookbook recipe for streaming (`for event in response: ...`). I started getting this error today.
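For what it’s worth, one workaround is to guard the cookbook-style loop so a mid-stream `APIError` doesn’t throw away the tokens already received. Below is a minimal sketch; the helper is generic (it drains any event iterator), and `extract` is a hypothetical accessor for the delta text in each chunk — the pre-1.0 `openai` usage shown in the comments is an assumption based on the traceback above.

```python
def consume_stream(events, extract, errors=(Exception,)):
    """Collect text from a streaming response; keep partial output on error."""
    parts = []
    try:
        for event in events:
            piece = extract(event)
            if piece:
                parts.append(piece)
    except errors:
        pass  # stream died mid-way; return what we have so the caller can retry
    return "".join(parts)

# With the pre-1.0 openai library this would be used roughly like:
#   response = openai.ChatCompletion.create(model=..., messages=..., stream=True)
#   text = consume_stream(
#       response,
#       lambda ev: ev["choices"][0]["delta"].get("content", ""),
#       errors=(openai.error.APIError,),
#   )
```

This doesn’t fix the server error, but it at least makes the failure recoverable instead of losing the whole response.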
Yes, it’s specifically with streaming, although I think responses are generally slow right now and streaming may just be the most affected by it.
All my tests pass when I turn off streaming.
{'error': {'message': 'The server had an error while processing your request. Sorry about that!', 'type': 'server_error', 'param': None, 'code': None}}
However, https://status.openai.com/ reports no issues today or yesterday; everything is green.
Is there a way to get actual info or warnings about system downtime or overload?
Also, https://platform.openai.com/docs/guides/error-codes/api-errors says that for “500 - The server had an error while processing your request” the recommended solution is to “Retry your request after a brief wait and contact us if the issue persists.”
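Following that guidance, a retry with jittered exponential backoff can be sketched as below. The `with_retries` wrapper is my own hypothetical helper, not part of the `openai` package; for streaming you’d want to wrap the whole call (create plus consume), since the error can surface mid-stream rather than on the initial request.

```python
import random
import time

def with_retries(fn, attempts=3, base_delay=1.0, retry_on=(Exception,), sleep=time.sleep):
    """Call fn(); on a retryable error, wait with exponential backoff and try again."""
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of retries; surface the server error
            # 1s, 2s, 4s, ... plus a little jitter to avoid thundering herds
            sleep(base_delay * (2 ** attempt) + random.random() * 0.1)

# Assumed usage with the pre-1.0 openai library:
#   with_retries(
#       lambda: run_streaming_completion(...),  # hypothetical function doing create + read
#       retry_on=(openai.error.APIError,),
#   )
```

This only papers over transient 500s, of course; if the failure rate stays high, retries just add latency.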
Does anyone know the official way to contact OpenAI? There’s no contact info anywhere, and the embedded help bot is not useful at all.