Continuous GPT-3 API 500 error: The server had an error while processing your request. Sorry about that!

Same here. Continuous 500 errors on every request, on an account with payment set up.

Doing the same: retrying up to 5 times, and showing the user an error after that if it still fails.

The error seems to be resolved now.

Same here. Continuous 500 errors on paid plan:

  "error": {
    "message": "The server experienced an error while processing your request. Sorry about that! You can retry your request, or contact us through our help center at if the error persists.",
    "type": "server_error",
    "param": null,
    "code": null

Yes, I have the same issue.
This is the first time I have experienced this issue in 2 years of working with the OpenAI API.
I'm going to implement an event-driven retry mechanism to minimize the damage.


I implemented 3 retries before failing for the same reason.
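A minimal sketch of that kind of bounded retry. This is not the poster's actual implementation; `call_with_retries` and the flaky stand-in function are illustrative names, and a real version would call the OpenAI client inside `fn` and catch the client's specific error types.

```python
import time

def call_with_retries(fn, max_retries=3, delay_seconds=2, retryable=(Exception,)):
    """Call fn(), retrying up to max_retries times on retryable errors.
    Re-raises the last error once the retry budget is exhausted."""
    for attempt in range(1, max_retries + 1):
        try:
            return fn()
        except retryable:
            if attempt == max_retries:
                raise  # give up and surface the error to the caller
            time.sleep(delay_seconds)

# Demo with a flaky stand-in for the API call: fails twice, then succeeds.
attempts = {"n": 0}

def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("500: server error")
    return "ok"

result = call_with_retries(flaky_call, max_retries=3, delay_seconds=0)
print(result)  # "ok" on the third attempt
```

With `max_retries=3` this fails hard only after three consecutive server errors, which matches the "3 retries before failing" approach above.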

It’s working ok. Not sure if we get billed for failed calls. But just happy to have a working model.


First time on the OpenAI community. I wondered if there was a way to report this error, or whether it was something I was doing wrong. But then I used Google and found this page 🙂


Sadly it seems the error is back again. I also hope I’m not being charged for the retries because I just had almost 20 in a row…

We are getting the same error message for most requests, but with a "429 Too Many Requests" status code, even though we are nowhere near the actual rate limits.

Hi, I tried sending a request to the API and it came back with "Quota exceeded", then "had an error while processing your request" on the second account.

I tried again 3 days later and it seems to work now.

Again, not a single request completes in under 1 minute, when it succeeds at all.
Location: France


The playground for the other models seems to be working. I asked text-curie why davinci is not working, and it said the following.

Why is davinci not working?

Davinci is not working because it has a virus.

What kind of virus?

Davinci is infected with a virus called Trojan.Horse.

Since when?

Since about a week ago.

Can that be the reason?


Hi, I need help with the API and prompt execution. My app currently runs 2 prompts: one is short (256 tokens), the other is 800. Suddenly the 800-token one is dead for no reason. OpenAI gives me this feedback: "Raw error for the API ESOCKETTIMEDOUT". It worked perfectly until recently. I have no idea how to fix this. Any suggestions? Thank you!

Hi, I was using LangChain version 0.0.113, and calling OpenAIEmbeddings (imported from langchain.embeddings.openai) worked. After I upgraded to the latest version, 0.0.187, I kept getting the error "api 500 error: The server had an error while processing your request. Sorry about that!". I reverted to the previous version and it worked again. Hope this helps!

I am currently getting this error. The OpenAI status site suggests the servers are operational, yet I cannot get my script past the second OpenAI.ChatCompletion call. Is anyone else having this issue today? Is the only cause of this error on the server side, and is there nothing I can do on my end? I added a try/except retry that makes 3 attempts with a 10-second wait between them, but it returns the error every time.

I am experiencing the same issue. Running gpt-4-0613 with 5-6k token payloads and function calls, most calls are returning error code 500 or 502, but some get through after a few reattempts.

gpt-3.5-turbo seems to be working without issues though.

I just switched to gpt-3.5-turbo and it worked like you said. However, after a couple of calls it returned this error:

openai.error.RateLimitError: That model is currently overloaded with other requests. You can retry your request, or contact us through our help center at if the error persists. (Please include the request ID [id] in your message.)

Have you come across this at any point today? I haven't received this error before (and I really don't believe I'm making a large number of requests).

Getting this error for gpt-4-0613 too.

I have made some 500-ish light-payload requests on gpt-3.5-turbo today and haven't run into that specific error yet, but I've seen it happen occasionally when the server is overloaded (I'm assuming). The best way to handle that is to wrap your API call in a try/except block with a timer between re-attempts and a maximum number of tries. 20 seconds is usually enough for the call to work on the second attempt.
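The wrapper described above could be sketched as a decorator. This is only one way to structure it, under the assumptions from the post: a 20-second wait, a capped number of tries, and re-raising once the budget runs out. `chat()` below is a placeholder, not the real ChatCompletion call, and the `sleep` parameter exists only so the demo doesn't actually wait.

```python
import functools
import time

def with_retries(max_tries=3, wait_seconds=20, sleep=time.sleep):
    """Decorator: retry the wrapped call on any exception,
    waiting wait_seconds between attempts, up to max_tries total."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_tries + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_tries:
                        raise  # out of tries; let the caller see the error
                    sleep(wait_seconds)
        return wrapper
    return decorate

# Hypothetical usage -- chat() stands in for the real API call.
calls = []

@with_retries(max_tries=3, wait_seconds=0)  # 0 only to keep this demo fast
def chat():
    calls.append(1)
    if len(calls) < 2:
        raise RuntimeError("500: server overloaded")
    return "response"

result = chat()
print(result)  # succeeds on the second attempt
```

In production you would likely catch only the client's retryable error types (server/rate-limit errors) rather than bare `Exception`, so genuine bugs still fail fast.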

Sadly, this isn't working for error codes 500/502, as most calls are simply not getting through. I'm guessing there are some stability issues today, even though the status page isn't showing any problems.


Getting this error too;
after some retries it goes through, though.

It would be nice to have some information about why this happens, since the status page doesn't display any.
