RateLimitError: The server had an error with no reason given

RateLimitError: The server had an error while processing your request. Sorry about that!
still not working

I started having this error on my free trial. I switched to a paid plan and the issue persists… Normally, whenever something similar occurs, I just add a timer (1-2 seconds) before every call, but this time it is just not working on my end.
I’m using text-davinci-003

I encounter the same error. I am on the pay-as-you-go billing option for OpenAI, and I persistently hit openai.error.RateLimitError while experimenting with model="text-davinci-003".

I tried creating a new API key and it worked the first time, but on the second request the openai.error.RateLimitError occurred again. This had not been the case earlier.

Same error for us: 429 with the text “The server had an error while processing your request. Sorry about that!” No response from OpenAI on their Messenger yet.

Super annoying as we’re just about to launch our product and now can’t! Terrible timing!

I’m encountering the same problem.

Also with text-davinci-003.

openai.error.RateLimitError: The server had an error while processing your request. Sorry about that!

Are people also having this problem with other bots?


I’m also running into this error using text-davinci-003.


Hey,

I just got a response in the discord channel from a couple of OpenAI staff members:

They have not suggested anything yet; I redirected them to this forum thread as well.


Same here with text-davinci-003: error 429 Too Many Requests, even when I try one request per minute.

Same error here: openai.error.RateLimitError: The server had an error while processing your request. Sorry about that!

With the text-davinci-003 model…

I have been receiving the same error for text-davinci-003 recently, so I ran a few tests. Happy to provide trace logs if the OAI team needs or wants them.

According to the latest docs, the rate limits for a paid account after 48 hours are:

  • 3,000 requests / minute
  • 250,000 davinci tokens / minute (and proportionally more for smaller models)

However, I can trigger a 429 error with close to 98% reliability by sending 3 or more concurrent requests to the API. Regardless of the size of the requests, as soon as 3 (sometimes 4) are being processed in parallel with the same API key, the service locks up. So far I’ve tried it with the Python SDK, Java, and manually with curl.

I’ll continue testing with non-parallel requests.
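For anyone wanting to reproduce (or avoid) the concurrency trigger described above: one workaround is to cap in-flight requests with a semaphore so you never exceed 2 at a time. This is a minimal sketch, not OpenAI-sanctioned code; `fake_request` is a stand-in for the real API call so the snippet runs on its own.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

MAX_IN_FLIGHT = 2          # stay under the observed 3-request trigger
_gate = threading.Semaphore(MAX_IN_FLIGHT)
_lock = threading.Lock()
in_flight = 0
peak = 0

def fake_request(i):
    """Stand-in for the real API call; records peak concurrency."""
    global in_flight, peak
    with _gate:            # blocks while MAX_IN_FLIGHT calls are already running
        with _lock:
            in_flight += 1
            peak = max(peak, in_flight)
        time.sleep(0.01)   # pretend the request takes a moment
        with _lock:
            in_flight -= 1
    return i

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(fake_request, range(20)))
```

Even with 8 worker threads, `peak` never exceeds 2, which in my testing has been enough to stop the spurious 429s.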


I have been getting the same error. Switching to text-davinci-002 makes it work normally.
I think the issue is specific to the text-davinci-003 model.

I encountered this error with text-davinci-003 too. My usage is far lower than the rate limit.

I threw a time.sleep(3) in there so I make at most 20 requests per minute. The API limits are in the documentation somewhere; I think it’s 3,000 calls per minute if you have a credit card linked.
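Rather than scattering time.sleep(3) calls around, you can wrap the delay in a small decorator that enforces a minimum interval between calls. A sketch along those lines; `throttle` and `call_api` are my own names, not part of the SDK, and the interval is shortened here so the example runs quickly:

```python
import functools
import time

def throttle(min_interval):
    """Decorator: enforce at least `min_interval` seconds between calls."""
    def wrap(fn):
        last = [0.0]  # mutable cell holding the time of the last call
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            wait = min_interval - (time.monotonic() - last[0])
            if wait > 0:
                time.sleep(wait)
            last[0] = time.monotonic()
            return fn(*args, **kwargs)
        return inner
    return wrap

@throttle(0.1)  # use 3.0 for the 20-requests-per-minute pacing from the post
def call_api(prompt):
    return f"response to {prompt}"  # stand-in for the real completion call

t0 = time.monotonic()
results = [call_api(p) for p in ("a", "b", "c")]
elapsed = time.monotonic() - t0
```

The first call goes through immediately; each subsequent call is delayed just enough to keep the spacing.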


I use a two-second delay and it works well, but three seconds is even better if you are not in a hurry 🙂

Thanks for being a good “API Netizen” and not hammering on the Beta API too much 🙂


The error was resolved, and as noted, it had nothing to do with delays :).

That’s a different error than the one being discussed here.

Yep, that’s fair enough! Just thought it might be related. 🙂

Is this resolved for you?
Also having the same problem today.

Yep, it’s back with a vengeance this morning. Some requests needing 15-18 retries.


Nasty issue indeed. Here’s how I try to mitigate it, even though it should really be fixed upstream…
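My mitigation is a retry wrapper with exponential backoff and jitter, capped so a bad run (like the 15-18 retries mentioned above) eventually fails loudly instead of looping forever. A hedged sketch: `with_backoff` is my own helper, and `RateLimitError` here is a local stand-in for `openai.error.RateLimitError` so the snippet is self-contained; `flaky` simulates a call that 429s twice, then succeeds.

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for openai.error.RateLimitError."""

def with_backoff(fn, max_retries=6, base=0.5, cap=30.0):
    """Call fn(); on RateLimitError, sleep base * 2**attempt (with jitter) and retry."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error
            delay = min(cap, base * 2 ** attempt) * random.uniform(0.5, 1.0)
            time.sleep(delay)

# Simulated flaky call: raises 429 twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError("429: The server had an error while processing your request.")
    return "ok"

result = with_backoff(flaky, base=0.01)
```

The jitter (`random.uniform(0.5, 1.0)`) matters when several workers hit a 429 at the same moment: without it they all retry in lockstep and trip the limit again together.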