RateLimitError: The server had an error with no reason given

Same here with text-davinci-003: error 429 Too Many Requests, even if I try 1 request / minute

Same error here: openai.error.RateLimitError: The server had an error while processing your request. Sorry about that!

With the text-davinci-003 model…

I have recently been receiving the same error for text-davinci-003, so I did a few tests. Happy to provide trace logs if the OAI team needs/wants them.

According to the latest docs, the rate limits for a paid account after 48 hours are:

  • 3,000 requests / minute
  • 250,000 davinci tokens / minute (and proportionally more for smaller models)

However, I am able to trigger a 429 error with close to 98% reliability by sending 3 or more concurrent requests to the API. It seems that, regardless of the size of the requests, as soon as 3 (sometimes 4) are being processed in parallel using the same API key, the service locks up. So far I’ve tried it with the Python SDK, Java, and manually using curl.
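
In case it helps anyone reproduce this, here is a minimal sketch of the kind of concurrency harness I mean. The `call_fn` argument is a hypothetical stand-in for whatever wraps a single API request and returns its HTTP status code; substitute your real client call.

```python
import concurrent.futures

def fire_concurrent(call_fn, n=3):
    """Fire n calls in parallel with the same key and collect the results.

    call_fn is a placeholder for a function that performs one API request
    and returns its HTTP status code (hypothetical; plug in your client).
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=n) as pool:
        futures = [pool.submit(call_fn) for _ in range(n)]
        return [f.result() for f in futures]

# Stand-in call that always "succeeds"; with a real request, any 429 in
# this list would reproduce the lock-up described above.
statuses = fire_concurrent(lambda: 200, n=3)
print(statuses)  # [200, 200, 200]
```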

I’ll continue testing with non-parallel requests.

I have been getting the same error. Switching to text-davinci-002 makes it work normally.
I think there is some issue with the text-davinci-003 model only.

I encountered this error with text-davinci-003 too. My usage is far lower than the rate limit.

For me, I threw a time.sleep(3) in there so I do at most 20 requests per minute. The API limits are somewhere in the documentation; I think it’s 3,000 calls per minute if you have a credit card linked.
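
A minimal sketch of that throttling idea, assuming each prompt maps to one API call (`call_fn` is a hypothetical wrapper for your real request):

```python
import time

def throttled_calls(call_fn, prompts, delay=3.0):
    """Call the API once per prompt, sleeping `delay` seconds between
    calls. With delay=3 that is ~20 requests/minute, far under the
    documented per-minute cap."""
    results = []
    for prompt in prompts:
        results.append(call_fn(prompt))  # one request at a time, never parallel
        time.sleep(delay)
    return results
```

Crude, but it sidesteps the parallel-request trigger entirely.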

I use a two-second delay and it works well, but three seconds is even better if you are not in a hurry 🙂

Thanks for being a good “API Netizen” and not hammering on the Beta API too much 🙂

The error was resolved, and as noted, it had nothing to do with delays :).

That’s a different error than the one being discussed here.

Yep, that’s fair enough! Just thought it might be related. 🙂

Is this resolved for you?
Also having the same problem today.

Yep, it’s back with a vengeance this morning. Some requests are needing 15–18 retries.

Nasty issue indeed. Here’s how I try to mitigate it, even though it really should be fixed upstream…

Same here. Does someone know a channel where we can ping the staff?

support@openai.com

Just encountered the same problem on a “pay-as-you-go” plan. I definitely didn’t hit any of the specified limits. I will try to contact support.

Just had the same issue: [ERROR] RateLimitError: The server had an error while processing your request. Sorry about that!

I’m having the same error, and even though my account is on a paid plan, I’m getting these random errors with the GPT models.

It’s happening again. Now with the latest GPT-4 model.

I’ve encountered the same error with GPT-4.
