Recurring "Max retries exceeded" error with long prompts using ChatGPT API

When I use the ChatGPT API, answers normally come back fine. But when both the prompt and the output are long text, the following error occurs:

File "D:\miniconda3\envs\py310\lib\http\client.py", line 287, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

The above exception was the direct cause of the following exception:

urllib3.exceptions.ProxyError: ('Unable to connect to proxy', RemoteDisconnected('Remote end closed connection without response'))

File "D:\miniconda3\envs\py310\lib\site-packages\openai\api_requestor.py", line 609, in request_raw
raise error.APIConnectionError(
openai.error.APIConnectionError: Error communicating with OpenAI: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions (Caused by ProxyError('Unable to connect to proxy', RemoteDisconnected('Remote end closed connection without response')))

For example, I want to pass an article of about 4,000 tokens to the model (gpt-3.5-turbo-16k) so that it can check for typos and output the corrected article, but then it reports this error. Shorter requests return their answers without any error.
I have tried increasing the request timeout and bypassing the proxy to access the API directly, but the problem still occurs.
Has anyone had a similar problem with the ChatGPT API? What did you do to solve or work around it? I would be very grateful for any suggestions.
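
For reference, the variations I tried looked roughly like this (the proxy address below is just a placeholder, not my real setup):

import openai

# Variation 1: a longer timeout on the call itself.
# (Note: the legacy openai-python library also accepts request_timeout,
# which is the parameter mapped to the HTTP connect/read timeouts.)
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",
    messages=messages,
    timeout=(300, 300),
)

# Variation 2: bypassing/overriding the proxy before making the call.
openai.proxy = None  # or e.g. {"https": "http://127.0.0.1:7890"} (placeholder)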


Hi and welcome to the developer forum!

Can you post the code that is making the API calls and any support code it relies on?

import openai

def ask_question(messages, question):
    messages.append({"role": "user", "content": question})
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo-16k", temperature=0.1,
                                            messages=messages, timeout=(300, 300))
    answer = response["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": answer})
    return answer

messages = [
    {"role": "system", "content": "You are a censor whose main job is to review some conversations, speeches, roundtable discussions or news, find typos and correct them."},
]
question = {my news content} + "\n" + "Are there any typos in this article? If so, please correct the typos and re-output the article"

answer = ask_question(messages, question)
print("assistant:", answer)

The above is the main code; the error is raised at the ChatCompletion.create() call.

Traceback (most recent call last):
File "c:\Users\TD\Desktop\python\GPT_QA\main copy.py", line 86, in <module>
answer = ask_question(messages, question)
File "c:\Users\TD\Desktop\python\GPT_QA\main copy.py", line 57, in ask_question
response = openai.ChatCompletion.create(model="gpt-3.5-turbo-16k", temperature=0.1, messages=messages, timeout=(300, 300))
File "D:\miniconda3\envs\py310\lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "D:\miniconda3\envs\py310\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "D:\miniconda3\envs\py310\lib\site-packages\openai\api_requestor.py", line 288, in request
result = self.request_raw(
File "D:\miniconda3\envs\py310\lib\site-packages\openai\api_requestor.py", line 609, in request_raw
raise error.APIConnectionError(
openai.error.APIConnectionError: Error communicating with OpenAI: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions (Caused by ProxyError('Unable to connect to proxy', RemoteDisconnected('Remote end closed connection without response')))

Thanks. The error seems to be with the proxy connect call, so it's between your code and the server. Is this running on a firewalled machine? Via a VPN? Behind a NAT? Some networking issue could be preventing the application from connecting to OpenAI. A test would be to curl from the command line on that machine (if you have CLI access); to be fair, even a simple ping to the OpenAI API endpoint would be good enough to check for basic connection problems.
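
For example, a quick connectivity probe from Python (a rough stand-in for the curl test; the endpoint and timeout here are just illustrative):

import requests

# Probe the API host directly. Even a 401 response proves the network
# path works, since the request reached OpenAI's servers.
try:
    r = requests.get("https://api.openai.com/v1/models", timeout=10)
    print("Reached api.openai.com, status:", r.status_code)
except requests.exceptions.RequestException as e:
    print("Connection failed:", e)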

Yeah, it runs on my computer via a VPN. What puzzles me is: if this were a proxy error, shouldn't every request fail to connect and return an answer? But I only get this error on specific requests (like long text). :worried:

VPNs are known to be problematic when accessing OpenAI; disabling it should remedy the problem, assuming the server is located in a supported country.

However, using a VPN is necessary for me. Even though this error only appears in a specific case like long-text requests, could it still be related to the VPN?

It seems to be. The error is not about the message payload; it concerns the connection to the server.

This is the error: you have some local network configuration problem.
Try a very short request and see if that works (a minimal example follows below). If it does, then the proxy parameters are right, but the proxy itself has some limitations built in.
If the short request also fails, then the problem is that your local Python code does not have the correct proxy parameters.
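
Something like this is enough for the short-request test (model and prompt are just examples):

import openai

# Minimal probe: if this succeeds while the long request fails, the proxy
# path itself works and the limitation is somewhere on the proxy side.
probe = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(probe["choices"][0]["message"]["content"])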

The short request works, but the long-text request does not; it throws this error.

This error has never happened before; it's really frustrating. :face_exhaling:

If the short request works, but not the long request, and the long request is not TOO long (as in, longer than the model allows), then the problem is with the configuration of your proxy, which is likely something inside your VPN provider if you're going through the VPN.

I would recommend trying the same request without a VPN (perhaps on a mobile hotspot) and comparing the results. Or, if that's not possible, at least try a totally different VPN provider or client.
This is almost certainly something in the chain between you and OpenAI, rather than something OpenAI can do anything about.

OK! I will try your suggestions. Thank you for your patient answer! :smiley:

Have you solved this problem? I have the same issue.

I solved it by using streaming. I changed:

response = openai.ChatCompletion.create(model=model, temperature=temperature, messages=messages)

to:

response = openai.ChatCompletion.create(model=model, temperature=temperature, messages=messages, stream=True)

No similar errors have occurred since switching to streaming responses.
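
Note that with stream=True the legacy openai-python library returns an iterator of chunks rather than a single response object, so the answer has to be accumulated. A minimal sketch (variable names as in the snippet above):

answer = ""
for chunk in response:
    # Each chunk carries a partial "delta"; some chunks (e.g. the one
    # setting the role) contain no content, hence the .get() default.
    delta = chunk["choices"][0]["delta"]
    answer += delta.get("content", "")
print(answer)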