OpenAI API Error "The server had an error while processing your request. Sorry about that"

I am using the pay-as-you-go billing option for OpenAI. I persistently encounter an `openai.error` exception ("The server had an error while processing your request. Sorry about that") while experimenting with `model="text-davinci-003"`.

Why am I getting this error?


Everyone is getting this error and OpenAI has not responded in almost 72 hours.

Not everyone :slight_smile:

I have three OpenAI API apps in production and none of these production API apps get this (or any) errors.

Maybe I’m just lucky :crossed_fingers:

You must be, considering there are dozens of responses from people who are getting it. I tried your technique with a 5-second delay, and 30-50% of the outputs still return this error.

I'm testing with a 10-second delay now, and it doesn't matter. The percentage of errors does not change no matter how long a delay you use.

A program I'd written nearly 6 months ago that had been working in rock-solid fashion now consistently errors out; the rate at which the requests are put through seems irrelevant. Even a single request, isolated in a 5-minute window with a max tokens of 512 to davinci-003, returns "Server error" about 30% of the time. I wonder if it's because davinci-003 and ChatGPT are sharing compute, but this is pretty rough.

Yes, everyone is having the same error; it's been happening for ~72 hours now, and we are still waiting on a response from the support team.

Yeah, I'm not privy to the OpenAI infrastructure, but my best guess is that adding 100 million users since December 2022 has really hurt the infrastructure.

Also, OpenAI quickly rolled out the "paid" versions, and I get the feeling that the workflow and internal processes for rate limiting, both in the OpenAI cloud and in Cloudflare, are fragile at best.

So, like many companies, OpenAI is in a firefight right now, trying to make money from users so they can invest more. Even the recent announcements of secured funding will not immediately solve these problems.

Also, it may be that there are "per-country" quotas set up behind the scenes (I once set this up for an app), so since I am developing and testing from a supported "small country in Asia with a relatively small tech base," my interactions (API or direct) may differ from those in larger countries with a lot more developers, etc.

For example, when an open ChatGPT session expires and I get the silly chatbot message that the system is overloaded and to try later, I simply reload once or twice and I'm in.

On the API side, I have never seen any errors.

This leads me to believe there may be some "geo-based" throttling, but of course I have no way to confirm this; I don't access OpenAI via a VPN, TOR, etc., and I'm in a "supported" country.

On my end, all I have had to do is add a little sleep to my loops, and I have never had an error since (which "someone" has totally berated me about, haha).
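For what it's worth, a minimal sketch of the "sleep between requests" approach described above. The `call_api` function here is a hypothetical stand-in for whatever actually issues the request (e.g. a wrapper around a completion call); only the pacing logic is the point.

```python
import time

def run_prompts(prompts, call_api, delay=5.0):
    """Send prompts one at a time, sleeping between requests.

    call_api: hypothetical function that performs one API request
              for a single prompt and returns its result.
    delay:    seconds to wait between consecutive requests.
    """
    results = []
    for i, prompt in enumerate(prompts):
        results.append(call_api(prompt))
        # Sleep between requests (but not after the last one)
        if i < len(prompts) - 1:
            time.sleep(delay)
    return results
```

As noted by others in this thread, pacing alone may not make the server errors go away; this only spaces the requests out.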

Sleep is good :slight_smile:

I didn't mean to berate you; I only wanted to make it known that adding a delay does not solve the issue… mainly because I don't want the OpenAI staff to think this is a user error when it most certainly is not.

The only thing that works is wrapping the curl call in ~10 retries. 3-4 out of 10 attempts fail, but one eventually succeeds.
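A sketch of that retry approach in Python (the poster uses curl, but the idea is the same). The function name and parameters here are illustrative, not from any library; in practice you would catch the specific API error class rather than bare `Exception`, and add backoff so retries don't hammer the server.

```python
import time

def with_retries(call, max_retries=10, base_delay=1.0):
    """Retry `call()` until it succeeds or retries are exhausted.

    call:        zero-argument function that performs the request.
    max_retries: total number of attempts before giving up.
    base_delay:  starting delay; doubled after each failure
                 (simple exponential backoff).
    """
    last_exc = None
    for attempt in range(max_retries):
        try:
            return call()
        except Exception as exc:  # narrow this to the API's error type
            last_exc = exc
            time.sleep(base_delay * (2 ** attempt))
    # All attempts failed; surface the last error to the caller
    raise last_exc
```

With a ~30-40% per-request failure rate, as reported above, 10 attempts make an all-fail outcome very unlikely, which matches the "one eventually works" experience.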