I encountered this error with text-davinci-003 too. My usage is far lower than the rate limit.
For me, I threw a time.sleep(3) in there so I make at most 20 requests per minute. The API limits are somewhere in the documentation; I think it's 3k calls per minute if you have a credit card linked.
I use a two-second delay and it works well, but three seconds is even better if you're not in a hurry.
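A minimal sketch of this delay approach; `make_request` here is a hypothetical stand-in for whatever API call you're making, not part of any client library:

```python
import time

def throttled_calls(make_request, prompts, delay_seconds=3):
    """Call make_request for each prompt, sleeping between calls.

    With delay_seconds=3 this stays at or below ~20 requests per minute.
    make_request is a placeholder for your actual API call.
    """
    results = []
    for i, prompt in enumerate(prompts):
        results.append(make_request(prompt))
        if i < len(prompts) - 1:  # no need to sleep after the last call
            time.sleep(delay_seconds)
    return results
```

Set `delay_seconds=2` or `3` depending on how patient you are, per the posts above.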
Thanks for being a good "API Netizen" and not hammering on the Beta API too much.
The error was resolved, and as noted, it had nothing to do with delays :).
That's a different error than the one being discussed here.
Yep, that's fair enough! Just thought it might be related.
Is this resolved for you?
Also having the same problem today.
Yep, it's back with a vengeance this morning. Some requests are needing 15-18 retries.
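If you're retrying that many times, a simple exponential backoff loop keeps you from hammering the API while it recovers. A sketch, assuming `make_request` is a placeholder for your actual API call (in real code you'd catch the client's specific RateLimitError instead of a broad `Exception`):

```python
import random
import time

def retry_with_backoff(make_request, max_retries=18, base_delay=1.0, max_delay=60.0):
    """Retry make_request on failure, doubling the wait each attempt.

    A random jitter factor spreads retries out so parallel clients
    don't all retry at the same instant.
    """
    for attempt in range(max_retries):
        try:
            return make_request()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay * (1 + random.random()))
```

With `max_retries=18` this matches the worst case mentioned above, though the later waits get long once the doubling hits `max_delay`.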
Same here. Does anyone know a channel where we can ping the staff?
support@openai.com
Just encountered the same problem on a "pay-as-you-go" plan. I definitely didn't hit any of the specified limits. I'll try contacting support.
Just had the same issue: [ERROR] RateLimitError: The server had an error while processing your request. Sorry about that!
I'm having the same error, and even though my keys are paid, I'm getting these random errors with the GPT models.
It's happening again. Now with the latest GPT-4 model.
I've encountered the same error with GPT-4.
It's happening now with GPT-4. I can't have complex interactions with it for long.
Rate limits can happen when the API is overloaded.
It doesn't actually mean you've hit "your" rate limit. The message returned in the response is slightly different too.
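One way to tell the two apart is to inspect the error message text. A rough heuristic, assuming the overload variant carries the "The server had an error" message quoted earlier in this thread, while a genuine quota error mentions your rate limit (the exact wordings are assumptions, not documented guarantees):

```python
def looks_like_server_overload(error_message: str) -> bool:
    """Distinguish a server-side overload from a true per-account rate limit.

    Heuristic based on the message texts seen in this thread; the strings
    could change, so treat this as a sketch rather than a stable API.
    """
    return "server had an error" in error_message.lower()
```

You could use this to decide whether to retry immediately with backoff (overload) or to slow your overall request rate (real rate limit).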
I'm having the same issue with text-davinci-003 again, despite being on a pay-as-you-go 48h+ plan (it errors after ~3 requests of 200 text tokens each unless I use a long delay of 120 seconds). Has anyone else encountered this issue recently?
I'm a bit confused by this.
Are you sending parallel requests? Are you saying that after roughly 3 requests you are timed out unless you wait 2 minutes before sending the next one? And on the last request that fails, are you being timed out or getting a RateLimitError?