RateLimitError: The server had an error with no reason given

I encountered this error with text-davinci-003 too. My usage is far lower than the rate limit.

For my part, I threw a time.sleep(3) in there so I make at most 20 requests per minute. The API limits are somewhere in the documentation; I think it's 3k calls per minute if you have a credit card linked.
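
A minimal sketch of that throttling approach (`make_request` is a placeholder for whatever API call you're making):

```python
import time

def throttled_calls(make_request, prompts, delay_s=3.0):
    """Call make_request once per prompt, sleeping between calls.

    A 3-second delay caps throughput at roughly 20 requests per minute,
    comfortably under the documented limits.
    """
    results = []
    for i, prompt in enumerate(prompts):
        results.append(make_request(prompt))
        if i < len(prompts) - 1:  # no need to sleep after the last call
            time.sleep(delay_s)
    return results
```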

I use a two-second delay and it works well, but three seconds is even better if you are not in a hurry :slight_smile:

Thanks for being a good "API Netizen" and not hammering on the Beta API too much :slight_smile:

The error was resolved, and as noted, it had nothing to do with delays :).

That's a different error than the one being discussed here.

Yep, that's fair enough! Just thought it might be related. :slight_smile:

Is this resolved for you?
I'm also having the same problem today.

Yep, it's back with a vengeance this morning. Some requests needing 15-18 retries.

Nasty issue indeed. Here's how I try to mitigate it, even though it should really be fixed server-side…
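
The mitigation described above can be sketched as a retry helper with exponential backoff and jitter (catching a generic `Exception` for simplicity; in a real client you would catch the SDK's `RateLimitError` specifically):

```python
import random
import time

def with_retries(call, max_retries=18, base_delay=1.0, cap=60.0):
    """Retry a flaky zero-argument call with exponential backoff plus jitter.

    max_retries=18 matches the worst case reported in this thread.
    """
    for attempt in range(max_retries + 1):
        try:
            return call()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries; surface the error to the caller
            delay = min(cap, base_delay * 2 ** attempt)
            time.sleep(delay * (1 + random.random()))  # jitter: up to 2x delay
```

The jitter matters when many clients retry at once: without it, everyone hammers the API again at the same instant.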

Same here. Does anyone know a channel where we can ping the staff?

support@openai.com

Just encountered the same problem on a "pay-as-you-go" plan. I definitely didn't hit any of the specified limits. I'll try contacting support.

Just had the same issue: [ERROR] RateLimitError: The server had an error while processing your request. Sorry about that!

I'm having the same error, and even though my keys are paid, I'm getting these random errors with the GPT models.

It's happening again. Now with the latest GPT-4 model.

I've encountered the same error with GPT-4.

It's happening now with GPT-4. I can't have complex interactions with it for long.

Rate limits can happen when the API is overloaded.

It doesn't actually mean you've hit "your" rate limit. The message returned in the response is slightly different, too.
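
One way to tell the two cases apart is to inspect the error message. This is just a heuristic; the strings below are taken from the errors quoted in this thread and could change at any time:

```python
def is_server_overload(message: str) -> bool:
    """Guess whether a 429 is transient server overload rather than a
    per-account limit.

    Overload errors in this thread say "The server had an error...", while
    genuine per-account limits say "Rate limit reached...". Retry the
    former with backoff; for the latter, slow your request rate down.
    """
    return "server had an error" in message.lower()
```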

I'm having the same issue with text-davinci-003 again, despite having been on a pay-as-you-go plan for 48+ hours (it errors after ~3 requests of 200 text tokens each unless I use a long delay of 120 seconds). Has anyone encountered this issue again?

I'm a bit confused by this.

Are you sending parallel requests? Are you saying that after roughly 3 requests you are being timed out unless you wait 2 minutes to send the next request? On the last request that fails, are you being timed out or given a RateLimitError?