Just encountered the same problem on a “pay-as-you-go” plan. I definitely didn’t hit any of the specified limits. I’ll try to contact support.
Just had the same issue: [ERROR] RateLimitError: The server had an error while processing your request. Sorry about that!
I’m having the same error, and even though my keys are paid, I’m getting these random errors with the GPT models.
It’s happening again. Now with the latest GPT-4 model.
lilakk:
I’ve encountered the same error with GPT-4.
It’s happening now with GPT-4. I can’t sustain long, complex interactions with it.
Rate limits can also occur when the API itself is overloaded. It doesn’t necessarily mean you’ve hit “your” rate limit; the message returned in the response is slightly different, too.
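Since the same `RateLimitError` can mean either of those two things, one way to triage is to look at the message text. This is just a sketch; the server-side message string is the one quoted in this thread, and the account-limit string is an assumption you should check against what your client actually receives.

```python
# Sketch: telling a server-side 429 apart from a genuine account rate limit
# by inspecting the error message. The SERVER_ERROR_TEXT below is the exact
# message quoted in this thread; the "Rate limit reached" variant is an
# assumption about the account-limit message and may differ.

SERVER_ERROR_TEXT = "The server had an error while processing your request"

def is_server_side_429(error_message: str) -> bool:
    """Return True when a RateLimitError's message suggests an
    OpenAI-side failure rather than your own quota being exhausted."""
    return SERVER_ERROR_TEXT in error_message

msg = ("The server had an error while processing your request. "
       "Sorry about that!")
print(is_server_side_429(msg))                              # True
print(is_server_side_429("Rate limit reached for requests"))  # False
```

If it’s the server-side variant, backing off and retrying is about all you can do client-side.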
tashku:
I’m having the same issue with text-davinci-003 again, despite having been on a pay-as-you-go plan for 48+ hours (it errors after ~3 requests of 200 text tokens each unless I use a long delay of 120 seconds). Has anyone else encountered this issue again?
I’m a bit confused by this.
Are you sending parallel requests? Are you saying that after roughly 3 requests you are being timed out unless you wait 2 minutes to send the next request? On the last request that fails, are you being timed out or given a RateLimitError?
That’s really strange.
First, I love your profile picture. Spirited Away, right? I don’t usually watch animated movies, but man it was so powerful & moving.
Have you looked at your usage in the account overview? Is it possible that these requests are being sent multiple times?
Here’s a great OpenAI cookbook on managing RateLimitErrors. That said, there have been cases in the past where an unspecified RateLimitError was thrown for a mishandled server error. Are you by any chance outside of North America?
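The core idea in that cookbook is retry with exponential backoff. Here’s a minimal sketch of the pattern; `RateLimitError` is defined as a stand-in exception so the example runs without the `openai` package, and the delay values are illustrative.

```python
import random
import time

# Stand-in for openai.error.RateLimitError so this sketch is self-contained.
class RateLimitError(Exception):
    pass

def with_backoff(fn, max_retries=5, base_delay=1.0, max_delay=60.0):
    """Call fn(), retrying on RateLimitError with exponential backoff
    plus a little jitter, up to max_retries attempts."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; let the caller see the error
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay + random.uniform(0, 0.25))

# Usage: a fake "API call" that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("simulated 429")
    return "ok"

print(with_backoff(flaky, base_delay=0.01))  # "ok" after 2 retries
```

In a real client you would pass your actual completion call as `fn` and keep `base_delay` around a second.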
tashku:
Hi, thank you. It’s Kiki’s Delivery Service, actually; a very touching film.
-
Yes – the Usage page shows just 5 requests sent within the last five minutes, totalling 1215 tokens. Is there a way to view more fine-grained usage information, in case duplicate requests are being sent by accident?
-
Thank you, I took a look through this beforehand; unfortunately, these strategies didn’t help in my case. (There is an update on this, which I will note at the end of my response.)
-
I am in North America.
Update: Even after waiting 1-2 minutes with no executions, attempting to send even one request is met with a RateLimitError immediately. I generated a new API key and tried again, and after about 9 requests spaced 2 minutes apart, the mysterious RateLimitError returned. If this is a server issue, is there some way to notify OpenAI about it? Alternatively, are there any other ways to find more detailed usage information?
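For the fine-grained-usage question: one option is to keep your own client-side audit log, so you can compare what you think you sent against the Usage page. The names below are purely illustrative; wrap whatever function actually makes your API call.

```python
import time

# Hypothetical client-side audit log, to check whether requests are being
# duplicated before they ever reach the Usage page.

request_log = []

def logged_call(fn, *args, **kwargs):
    """Record a timestamp for every outgoing request, then call fn."""
    request_log.append(time.time())
    return fn(*args, **kwargs)

def duplicates_within(window_seconds=0.5):
    """Count pairs of consecutive requests closer than window_seconds,
    a rough signal that something may be double-sending."""
    stamps = sorted(request_log)
    return sum(
        1 for a, b in zip(stamps, stamps[1:]) if b - a < window_seconds
    )

# Usage sketch: three back-to-back calls.
for _ in range(3):
    logged_call(lambda: None)
print(len(request_log))      # 3
print(duplicates_within())   # 2, since the demo calls are back-to-back
```

If the count in `request_log` matches the Usage page, the duplicates theory is ruled out and the problem is more likely server-side.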
I think there’s a distinction between an app that genuinely hits a rate limit through excessive requests, and receiving a 429 because of an OpenAI server-side issue.
In the past, mods in the Discord have acknowledged this as an issue on their side and resolved it. Over a 2-hour period earlier, I made approximately 5 requests/minute, and roughly 2 of every 5 returned a 429.
tashku:
Thank you. I’m still having the issue on my end (now even a single request hits a RateLimitError, including when attempting batching), but I was wondering if the Discord link was available. (I sent a help message to the Support team but have had no response yet.)
Interesting.
@proxima You’re completely right. It’s always a good idea to rule out any potential client-side issues first, though.
Although I am not running any requests right now, if this were a widespread issue the forums & Discord would usually be overwhelmed with reports.
Would you mind posting the request that you make?
Absolutely agree with validating client-side issues first. I think it’s cleared up now - thanks for your help.
Can anyone ping the staff? I consistently get the error and have no idea why, despite adding a 1-second sleep after every call. @RonaldGRuckus
raise self.handle_error_response(
openai.error.RateLimitError: The server had an error while processing your request. Sorry about that!
Since yesterday: openai.error.RateLimitError: The server had an error while processing your request. Sorry about that!
I am nowhere near any rate limit.
Same issue here when using text-davinci-003. We are nowhere near the rate limit for our account, but we get the rate limit error almost every request.
What’s your max_tokens allotment? I changed mine from 3k to 2k and it seems to be working now (after previously erroring).
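One plausible reason lowering max_tokens helps: the rate limiter can count a request’s max_tokens toward your tokens-per-minute budget, so a large allotment is effectively “reserved” even when the completion comes back short. A rough sketch of the arithmetic, assuming that charging model; the 90,000 TPM figure is just an example quota, not your account’s actual limit.

```python
# Rough back-of-the-envelope: how many requests/minute fit under a TPM
# quota if the limiter charges prompt_tokens + max_tokens per call.
# TPM_LIMIT is an example figure, not a real account limit.

TPM_LIMIT = 90_000  # example tokens-per-minute quota

def requests_per_minute_budget(prompt_tokens: int, max_tokens: int) -> int:
    """Requests per minute before the TPM quota is exhausted, assuming
    each call is charged prompt_tokens + max_tokens up front."""
    per_request = prompt_tokens + max_tokens
    return TPM_LIMIT // per_request

print(requests_per_minute_budget(200, 3000))  # 28
print(requests_per_minute_budget(200, 2000))  # 40
```

So dropping max_tokens from 3k to 2k meaningfully raises the number of requests that fit under the same quota, which is consistent with the errors clearing up.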