Random errors when accessing the ChatGPT API: request timeouts, ServiceUnavailableError, RateLimitError, APIConnectionError

I have been trying to use the ChatGPT API through the LangChain integration, and I am getting random errors from the OpenAI API, such as:

- Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60)
- ServiceUnavailableError: The server is overloaded or not ready yet.
- RateLimitError: That model is currently overloaded with other requests. You can retry your request, or contact us through our help center at help.openai.com if the error persists.
- APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))
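All four of these are transient server-side or network errors, so the usual remedy is to retry with exponential backoff rather than fail immediately. A minimal sketch (the `TransientAPIError` class is a hypothetical stand-in for the OpenAI client's real exception classes such as `RateLimitError`):

```python
import random
import time

class TransientAPIError(Exception):
    """Placeholder for RateLimitError, ServiceUnavailableError, etc."""

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(), retrying transient failures with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return fn()
        except TransientAPIError:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            # Sleep 1s, 2s, 4s, ... plus jitter so parallel clients
            # don't all retry at the same instant.
            time.sleep(base_delay * 2 ** attempt + random.random())
```

In real code you would catch the specific exceptions the OpenAI library raises instead of the placeholder class; the retry/backoff structure stays the same.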

I am not sure what the issue is. I thought OpenAI might be down, but the status page shows everything green and operational.

Can an expert help me understand what the real issue is? I would appreciate it if anyone from the OpenAI team could help resolve it.


I had several issues attempting to use LangChain with OpenAI, mostly exhausted-token errors.
I ran out of tokens while implementing example “conversations” with a vector DB.

VectorDBQA is deprecated - please use `from langchain.chains import RetrievalQA`.
So I tried RetrievalQA, but then moved on to ConversationalRetrievalChain.
Now I no longer exhaust tokens. FYI.
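One reason naive conversation chains exhaust tokens is that they resend the full chat history with every request, so the prompt grows without bound. A rough sketch of the underlying idea, trimming history to a token budget, is below; the 4-characters-per-token estimate is a crude assumption (real counting would use a tokenizer like tiktoken), and both function names are mine:

```python
def estimate_tokens(text):
    # Crude assumption: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_history(messages, budget):
    """Keep the most recent messages whose estimated tokens fit the budget."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk newest-to-oldest
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break  # oldest messages get dropped first
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order
```

Conversational retrieval chains do something similar in spirit: they condense or bound the history instead of stuffing every prior turn into each prompt.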

When processing lots of files for definitions (using the OpenAI API directly, no LangChain), I would exhaust my account's rate limits just running full bore, so I had to add sleeps so my methods wouldn't overrun my account's allowed rate.
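The "add sleeps" approach can be made systematic by spacing calls to match your account's requests-per-minute limit. A minimal sketch (the `Throttle` class and its parameters are my own names, not part of any library):

```python
import time

class Throttle:
    """Block so that calls are spaced at least 60/RPM seconds apart."""

    def __init__(self, requests_per_minute):
        self.min_interval = 60.0 / requests_per_minute
        self._last = 0.0

    def wait(self):
        # Sleep only for whatever part of the interval hasn't elapsed yet.
        elapsed = time.monotonic() - self._last
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last = time.monotonic()
```

Calling `throttle.wait()` before each API request keeps a tight loop from bursting past the per-minute limit, while back-to-back calls that are naturally slow pay no extra delay.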

Also note that API limits are NOT the same as the ChatGPT app's limits. Different models also have different token limits and different throttling.
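To illustrate the per-model difference, here is a sketch with the context-window sizes these models shipped with (figures change between model versions, so always check the current model documentation; the helper function is my own):

```python
# Illustrative context-window sizes for the original model releases.
# Newer snapshots may differ - treat these as examples, not a reference.
CONTEXT_LIMITS = {
    "gpt-3.5-turbo": 4096,
    "gpt-3.5-turbo-16k": 16384,
    "gpt-4": 8192,
}

def max_completion_tokens(model, prompt_tokens):
    """Tokens left for the reply after the prompt is counted against the window."""
    return CONTEXT_LIMITS[model] - prompt_tokens
```

The practical upshot: a prompt that fits comfortably under one model's limit can overflow another's, which is a separate failure mode from the per-minute request throttling above.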
