I am working with both the embedding and the completion APIs (via openai python lib) and receiving the following error:
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/embeddings (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x40abba56d0>: Failed to establish a new connection: [Errno 110] Connection timed out'))
This happens at a random cadence, without my changing anything about the app. The client gets these connection errors three times in a row (retrying with backoff, I assume) and then finally fails with: openai.error.APIConnectionError: Error communicating with OpenAI
The errors appear at random times and then go away in subsequent requests without any change on my end. Does anyone else experience this? How can I mitigate it? Is it just a fact of using the API?
Hey @spelsbergsam,
I'm running into similar problems, which make our app feel flaky, and it's hard to separate real issues from OpenAI endpoint timeouts.
I'm planning to add retry logic around the API calls so things are a little more stable.
I've heard talk that ChatGPT is a variant of the davinci model family and may share some of the same infrastructure. I definitely saw massive response times when both text-davinci-003 and ChatGPT launched, so I'd imagine surges in popularity occasionally knock their servers over.
If you find any solution more elegant than retrying on failure, let me know!
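For what it's worth, the retry logic mentioned above can be a small stdlib-only wrapper. This is just a sketch: the function name `call_with_backoff` and the stub `flaky` function are my own illustrations, not part of the openai library.

```python
import random
import time

def call_with_backoff(fn, max_attempts=4, base_delay=1.0,
                      retryable=(ConnectionError, TimeoutError)):
    """Retry fn() on transient errors with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            # Sleep base, 2*base, 4*base, ... plus random jitter to
            # avoid many clients retrying in lockstep.
            time.sleep(base_delay * (2 ** attempt) + random.random() * base_delay)

# Stub standing in for an API call: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("simulated timeout")
    return "ok"

result = call_with_backoff(flaky, base_delay=0.01)  # succeeds on attempt 3
```

In real use you would pass a closure over your embedding or completion call (e.g. `lambda: openai.Embedding.create(...)`) and list the openai exception types you consider transient in `retryable`.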
Experiencing the same issue. I'm getting an ESOCKETTIMEDOUT error when I call the API now. It worked fine up until about two days ago, then became intermittent, and now isn't connecting at all. Is anyone else experiencing this? Any solutions? Many thanks.
I must admit I'm quite upset. I am perfectly happy for OpenAI to limit the usage of ChatGPT, since it's something they offer for free and, as they say, it's research. However, the API endpoints are a product we pay for, and I am currently using them full time for a project. Every time the endpoints become unusable, I am essentially left twiddling my thumbs until further notice. Not cool, OpenAI.
Originally my prompt (which was working up until three days ago) was around 300 words. As an experiment I've just reduced it to 90 words and am getting the same timeout errors. I submitted two tickets to OpenAI yesterday, with no response. I've delayed a product launch because of this and am now very nervous about launching at all, particularly as 1) there has been no response to my tickets (for a service I am paying for), and 2) the status page does not mention any API issues, so is OpenAI even aware of these? Are they even tracking them?
OpenAI is doing some great stuff, but I would suggest that, considering the success of ChatGPT, more thought needs to be given to server allocation and to prioritising paying customers. Otherwise, customers will have to start thinking about alternatives. If anybody from OpenAI is monitoring this forum thread, I/we would really appreciate an update. Thanks.
I'm getting this timeout error on a very important use of the API. Since I am using the Python library, please make it retry automatically with an exponential backoff strategy.
2023-03-20 08:36:56,717 - summarize-from-spreadsheet-with-chatgpt.py:177 - ERROR - TimeoutException. Please retry this manually:
Traceback (most recent call last):
File "/Users/arugeo/miniconda3/envs/python3.8/lib/python3.8/site-packages/urllib3/connectionpool.py", line 449, in _make_request
six.raise_from(e, None)
File "", line 3, in raise_from
File "/Users/arugeo/miniconda3/envs/python3.8/lib/python3.8/site-packages/urllib3/connectionpool.py", line 444, in _make_request
httplib_response = conn.getresponse()
File "/Users/arugeo/miniconda3/envs/python3.8/lib/python3.8/http/client.py", line 1348, in getresponse
response.begin()
File "/Users/arugeo/miniconda3/envs/python3.8/lib/python3.8/http/client.py", line 316, in begin
version, status, reason = self._read_status()
File "/Users/arugeo/miniconda3/envs/python3.8/lib/python3.8/http/client.py", line 277, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/Users/arugeo/miniconda3/envs/python3.8/lib/python3.8/socket.py", line 669, in readinto
return self._sock.recv_into(b)
File "/Users/arugeo/miniconda3/envs/python3.8/lib/python3.8/ssl.py", line 1241, in recv_into
return self.read(nbytes, buffer)
File "/Users/arugeo/miniconda3/envs/python3.8/lib/python3.8/ssl.py", line 1099, in read
return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out
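Until the library retries for you, one stdlib-only way to stop a hung read like the one above from blocking your app indefinitely is to impose your own deadline around the call. This is a sketch under assumptions: `call_with_deadline` is my own helper, not an openai library feature, and it only bounds how long the caller waits; it cannot cancel the underlying socket read.

```python
import concurrent.futures
import time

def call_with_deadline(fn, seconds):
    """Run fn() in a worker thread; raise TimeoutError past the deadline.

    Caveat: the worker thread is abandoned, not killed -- this bounds the
    caller's wait but does not tear down the stuck connection itself.
    """
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(fn).result(timeout=seconds)
    finally:
        pool.shutdown(wait=False)

# Demo with stubs standing in for real API calls.
fast = call_with_deadline(lambda: "done", 1.0)          # returns normally

try:
    call_with_deadline(lambda: time.sleep(0.5), 0.05)   # simulated hang
    timed_out = False
except concurrent.futures.TimeoutError:
    timed_out = True                                    # caller escapes early
```

In practice you would combine this with retry logic: if the deadline fires, treat it like any other transient error and retry the request with backoff.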
This is ridiculous. We are paying customers, some of us paying hundreds or thousands of dollars per month, and the API randomly times out for no apparent reason. The status page shows nothing, there is no response from OpenAI, and there is no obvious, easy way to submit tickets.
This section governs your use of services or features that OpenAI offers on an alpha, preview, early access, or beta basis (“Beta Services”). Beta Services are offered “as-is” to allow testing and evaluation.
OpenAI makes no representations or warranties for Beta Services, including any warranty that Beta Services will be generally available, uninterrupted or error-free, or that Content will be secure or not lost or damaged. Except to the extent prohibited by law, OpenAI expressly disclaims all warranties for Beta Services, including any implied warranties of merchantability, satisfactory quality, fitness for a particular purpose, non-infringement, or quiet enjoyment, and any warranties arising out of any course of dealing or usage of trade.