Nasty issue indeed. Here’s how I try to mitigate it, even though it really should be fixed…

Same here. Does anyone know of a channel where we can ping the staff?

support@openai.com

Just encountered the same problem on a “pay-as-you-go” plan. I definitely didn’t hit any of the specified limits. I’ll try to contact support.

Just had the same issue: `[ERROR] RateLimitError: The server had an error while processing your request. Sorry about that!`

I’m having the same error, and even though my keys are paid, I’m getting these random errors with the gpt models.


It’s happening again. Now with the latest GPT-4 model.

I’ve encountered the same error with GPT-4.


It’s happening now with GPT-4. I can’t have complex interactions with it for long.

Rate limits can happen when the API is overloaded.

It doesn’t actually mean you’ve hit *your* rate limit. The message returned in the response is slightly different, too.
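To illustrate the distinction: the server-side 429 carries the apologetic “The server had an error…” text quoted elsewhere in this thread, whereas a genuine rate limit talks about your own limit. A rough heuristic sketch (the exact quota wording is an assumption and may change):

```python
def is_server_side_429(message: str) -> bool:
    """Heuristic: distinguish OpenAI's server-side 429 from a genuine
    rate limit by the error text. The server-side variant (seen in this
    thread) apologizes; a true rate-limit message refers to your limit.
    """
    return "The server had an error while processing your request" in message

print(is_server_side_429(
    "The server had an error while processing your request. Sorry about that!"))  # True
print(is_server_side_429("Rate limit reached for requests."))  # False
```

If this returns `True`, backing off and retrying is usually all you can do; it is not something your own request volume caused.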


I’m having the same issue with text-davinci-003 again, despite having been on a pay-as-you-go plan for 48+ hours (it errors after ~3 requests of 200 text tokens each unless I use a long delay of 120 seconds). Has anyone encountered this issue again?

I’m a bit confused by this.

Are you sending parallel requests? Are you saying that after roughly 3 requests you are being timed out unless you wait 2 minutes to send the next request? On the last request that fails, are you being timed out or given a RateLimitError?

  1. Just one request at a time.
  2. After around 3 requests, I hit an unspecified, generic RateLimitError. I can bypass this by putting a delay of at least a minute (inconsistently, but closer to 2 minutes) between individual, unbatched requests.

That’s really strange.

First, I love your profile picture. Spirited Away, right? I don’t usually watch animated movies, but man it was so powerful & moving.

Have you looked at your usage in the account overview? Is it possible that these requests are being sent multiple times?

Here’s a great OpenAI cookbook on managing RateLimitErrors. That said, in the past an unspecified RateLimitError has been thrown for a mishandled server error. Are you by any chance outside of North America?
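The cookbook’s core idea is retry with exponential backoff. A minimal, generic sketch of that strategy is below; `retry_with_backoff`, `FakeRateLimitError`, and `flaky_request` are hypothetical stand-ins (in real code you would pass `openai.error.RateLimitError` as `retry_on` and wrap your actual API call):

```python
import random
import time

def retry_with_backoff(fn, max_retries=5, base_delay=1.0, max_delay=60.0,
                       retry_on=(Exception,)):
    """Call fn(), retrying on the given exceptions with exponential backoff."""
    delay = base_delay
    for attempt in range(max_retries):
        try:
            return fn()
        except retry_on:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            # Sleep with jitter so parallel clients don't retry in lockstep.
            time.sleep(delay * (1 + random.random()))
            delay = min(delay * 2, max_delay)

# Hypothetical flaky function standing in for an API call:
class FakeRateLimitError(Exception):
    pass

calls = {"n": 0}

def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise FakeRateLimitError("The server had an error while processing your request.")
    return "ok"

result = retry_with_backoff(flaky_request, base_delay=0.01,
                            retry_on=(FakeRateLimitError,))
print(result)  # "ok", after two retried failures
```

A fixed delay between requests (as tried above) works too, but backoff with jitter recovers faster when the error is transient and waits longer when it isn’t.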


Hi, thank you. Kiki’s Delivery Service 🙂 A very touching film.

  1. Yes – the Usage page shows just 5 requests sent within the last five minutes, totalling 1215 tokens. Is there a way to view more fine-grained usage information, in case repeat requests are being sent by accident?

  2. Thank you, I took a look through this beforehand, unfortunately these strategies were unable to help with my case. (There is an update on this, I will note at the end of my response).

  3. I am in North America.

Update: Even after waiting 1–2 minutes with no requests, sending even a single request is met with a RateLimitError immediately. I generated a new API key and tried again, and after about 9 requests spaced 2 minutes apart, the mysterious RateLimitError returned. If this is a server issue, is there some way to notify OpenAI about it? Alternatively, are there other ways to see more detailed usage information?

I think there’s a distinction between an app that genuinely hits a rate limit through excessive requests, and receiving a 429 because of an OpenAI server-side issue.

In the past, mods on Discord have acknowledged this as an issue on their side and resolved it. Over a 2-hour span earlier, I made approximately 5 requests/minute, and roughly 2 of them returned 429.

Thank you. I’m still having the issue on my end (now even a single request hits a RateLimitError, including when attempting batching), but I was wondering if the Discord link was available (I sent a help message to the Support team but have had no response yet).

Interesting.

@proxima You’re completely right. It’s always a good idea to rule out any potential client-side issues first, though.

Although I am not running any requests right now, if this were a widespread issue, the forums & Discord would usually be overwhelmed with reports.

Would you mind posting the request that you make?

Absolutely agree with validating client-side issues first. I think it’s cleared up now - thanks for your help.

Can anyone ping the staff? I consistently get the error and have no idea why, despite sleeping for 1 second after every call. @RonaldGRuckus

```
raise self.handle_error_response(
openai.error.RateLimitError: The server had an error while processing your request. Sorry about that!
```