I have been using my ‘train_3_motive_en.csv’ fine-tuned model to generate text since yesterday. It ran well at first, but for the last 7 hours I have been getting this error:
openai.error.APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
I’m using the chat completion endpoint, currently with gpt-3.5-turbo. For a few weeks now I’d been running dozens of requests every hour without any issues. Yesterday, about half of them started failing with the same APIConnectionError that’s been reported here. Today, around 80% of requests are failing with that error.
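For anyone hitting this intermittently, a common client-side workaround (not an official fix) is to wrap the call in a retry with exponential backoff, catching only connection-level errors rather than a bare except. A minimal sketch; the function and parameter names are my own, not part of the openai library:

```python
import random
import time


def retry_with_backoff(fn, retryable=(ConnectionError,), max_retries=5,
                       base_delay=1.0, max_delay=30.0):
    """Call fn(), retrying only on the given exception types.

    Waits base_delay, 2*base_delay, 4*base_delay, ... (capped at
    max_delay) plus a little jitter between attempts.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except retryable:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the original error
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay + random.uniform(0, 0.1 * delay))


# Usage (hypothetical): pass the API call as a zero-argument closure and
# the SDK's connection error as the retryable type, e.g.
# retry_with_backoff(lambda: openai.ChatCompletion.create(...),
#                    retryable=(openai.error.APIConnectionError,))
```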
Hi @AgusPG
I agree with you: catching a generic exception is bad engineering practice and can have serious consequences in software development. My solution was aimed at an NLP researcher looking for a quick fix, who just wants the output to analyse offline. As a researcher with a previous developer background I see what you mean, but I’ve also seen far worse in research code than catching a generic exception in order to get things working in the short term. It’s definitely not advisable for a scalable application, and if you have a live app with real users then my solution is not for you. Thanks for pointing out aiohttp_retry; I’ll look into it.
I’m also still getting tons of connection errors, timeouts, and 502s, even after yesterday’s fix. Backoff helps, but my requests often retry 3+ times before my serverless functions time out…
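When retries keep colliding with a serverless timeout, one option is to bound the whole retry loop by a deadline instead of a fixed attempt count, so the function fails fast before the platform kills it. A sketch assuming your platform allows roughly N seconds per invocation; the names and numbers are illustrative:

```python
import time


def retry_until_deadline(fn, deadline_s=25.0, retryable=(ConnectionError,),
                         base_delay=0.5):
    """Retry fn() with doubling delays, but never past the deadline.

    Re-raises the last error once there is no time budget left for
    another sleep-and-retry cycle.
    """
    start = time.monotonic()
    delay = base_delay
    while True:
        try:
            return fn()
        except retryable:
            elapsed = time.monotonic() - start
            if elapsed + delay >= deadline_s:
                raise  # no budget left; fail before the platform timeout
            time.sleep(delay)
            delay *= 2
```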
Oh yeah, absolutely. If you can work offline and don’t need real-time responses, I agree that you can be more flexible on the software-engineering side of your app.
One observation I’m curious if any of you have witnessed…
Context: I’m using the text completion API (not chat), and my application is built to iterate through various texts, calling the API each time.
Observation: when I run this app, the first 4–9 API calls work, each executing in under 1 s; subsequent calls are either extremely slow (>90 s) or fail with the exception in this thread.
Has anyone else seen this behavior? It seems like there is some unofficial throttling going on.
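One way to test the throttling hypothesis client-side is to space the calls out and see whether the failures disappear. A minimal pacing sketch; the interval is a guess for experimentation, not a documented limit:

```python
import time


def paced(fns, min_interval_s=2.0):
    """Run zero-argument callables in order, leaving at least
    min_interval_s between the *starts* of consecutive calls."""
    results = []
    last_start = None
    for fn in fns:
        if last_start is not None:
            wait = min_interval_s - (time.monotonic() - last_start)
            if wait > 0:
                time.sleep(wait)
        last_start = time.monotonic()
        results.append(fn())
    return results
```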
I am running into this issue as well. Worse still, despite the API closing my connection without a response, I have been charged regardless!!
I’m using GPT-4 and getting close to the maximum token limit, which I believe has something to do with it. When I just run the test code that the API documentation suggests it runs flawlessly. Very strange!
I’m also running into a similar issue. It seems that the connection remains open for some time between requests instead of being closed. (This is just a guess; I have yet to audit the system to confirm.) It’s very reproducible in my project, and it seems to happen when I let the program idle for a few minutes. Initializing the first request has never been an issue; continuing the conversation after a pause throws the error.
I could have sworn there was some verbiage I read somewhere about terminating the connection manually, though I can’t find it. Is anyone familiar with what I’m talking about?
To add to this: if I change the parameters of the request to completely remove any past chat history from the context, then I don’t get the error anymore. My requests range anywhere from 50 to 4,000 tokens.
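If dropping the past context avoids the error, a middle ground is to cap the history to a token budget before each request instead of removing it entirely. A rough sketch using a characters-per-token approximation (about 4 chars/token for English; use tiktoken for exact counts). The message format follows the chat API’s list-of-dicts convention; the budget numbers are illustrative:

```python
def trim_history(messages, max_tokens=3000, chars_per_token=4):
    """Keep the system message (if first) plus the most recent messages
    that fit within an approximate token budget."""
    system = messages[:1] if messages and messages[0]["role"] == "system" else []
    rest = messages[len(system):]

    budget = max_tokens - (sum(len(m["content"]) for m in system) // chars_per_token)
    kept = []
    used = 0
    for m in reversed(rest):          # walk newest-first
        cost = max(1, len(m["content"]) // chars_per_token)
        if used + cost > budget:
            break                     # older messages get dropped
        kept.append(m)
        used += cost
    return system + list(reversed(kept))
```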
Receiving the same error but on the Moderation endpoint. Super intermittent. Just leaving this here so no one else goes crazy trying to find out what’s wrong with their code.
Error communicating with OpenAI: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))
I had the same error.
Simple prompts like “Hello.” worked even with system messages over 6k tokens using gpt-4, but I couldn’t get any answer to question prompts; they failed with the following error message:
openai.error.APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', OSError(0, 'Error'))
One more thing to check: if you’re using response_data = json.loads(completion.choices[0].message['content']), maybe change it to response_data = completion.choices[0].message['content']. That seemed to fix my issue.
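If you do need structured output, a defensive variant is to attempt the JSON parse and fall back to the raw string, so a plain-text reply doesn’t raise. A small sketch; at this point completion.choices[0].message['content'] is just a string:

```python
import json


def parse_reply(content):
    """Return parsed JSON when the reply is valid JSON, else the raw value."""
    try:
        return json.loads(content)
    except (json.JSONDecodeError, TypeError):
        return content
```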
Are there any known issues with the OpenAI Python library when using the API with the stream=True parameter? I’m consistently encountering the following error:
"Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'>".
I would greatly appreciate any suggestions or guidance regarding this. I’m unsure if I’m making any mistakes in my implementation.
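That particular error usually isn’t about stream=True. In the pre-1.0 openai Python library it appears when api_type is set to "azure" (or "azure_ad"), in which case ChatCompletion.create expects an engine/deployment_id argument rather than just model. A config sketch under that assumption; the resource name, API version, and deployment name below are placeholders, not real values:

```python
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"  # placeholder
openai.api_version = "2023-05-15"                            # example version
openai.api_key = "..."

response = openai.ChatCompletion.create(
    deployment_id="my-gpt35-deployment",  # placeholder: your Azure deployment name
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
```

If you are calling the OpenAI API directly (not Azure), check that api_type hasn’t been set to "azure" somewhere (e.g. via the OPENAI_API_TYPE environment variable), since that also triggers this message.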
I am accessing OpenAI services from Germany (no VPN), and I constantly get this error whenever the prompt is over ~1,250 characters (~250 tokens, calculated with the Tokenizer tool).
I have a paid account, and for smaller prompts the code (derived from the quick-start example app) can establish a connection and even receives a valid response.
Any suggestions? I have been struggling with this issue for over a month now.
P.S.: the first time I used this code, it actually worked. Unfortunately, I am unable to recreate that initial successful run, which leads me to believe something changed on the server side.
Are there some sort of “Blocked” / “Throttled” lists maintained by OpenAI?
There is an open issue on GitHub for the openai-python library (issue number 371). Sorry, I can’t add the link in a message.
It describes how you can hit this problem if you are using the Python SDK:
As a performance optimisation, the SDK reuses a Session (so API calls go over the same connection). As a side effect, if the connection sits unused for a while the server will close it, and the next request in that session fails with: Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
There are more details in that GitHub issue, along with an example that uses retries as a workaround.
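Given that failure mode (a kept-alive connection the server has since closed), a narrow workaround is to retry exactly once on that specific error, since the second attempt opens a fresh connection. A stdlib-only sketch; in practice you would likely also include the SDK’s own APIConnectionError in the retryable tuple:

```python
from http.client import RemoteDisconnected


def retry_stale_connection(fn, retryable=(RemoteDisconnected, ConnectionError)):
    """Retry once when a reused connection was closed server-side.

    The first attempt can fail because it reuses an idle Session
    connection; the retry opens a new connection and normally succeeds.
    """
    try:
        return fn()
    except retryable:
        return fn()
```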