RateLimitError: The server had an error with no reason given

For the past few hours the API has been returning a generic rate limit error. I can't find another explanation, so I have to assume the service is down or intermittent:

openai.error.RateLimitError: The server had an error while processing your request. Sorry about that!

My traffic is very low, just a few requests and everything worked last week.

Using Python 3.10 with the Python openai client 0.26.4. I'm located in EEST (+2 GMT).

Anyone else seeing this?

8 Likes

I have the same problem with the text-davinci-003 model, though text-curie-001 works fine.
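One way to check which models are affected is to probe each one with the same tiny request and record which ones fail. A minimal sketch: the `probe` helper and its injected `make_call` are mine, and the commented-out call assumes the 0.26.x openai SDK (plug in your own key before running it for real):

```python
def probe(models, make_call):
    """Try make_call(model) for each model; record 'ok' or the error seen."""
    results = {}
    for model in models:
        try:
            make_call(model)
            results[model] = "ok"
        except Exception as exc:  # e.g. openai.error.RateLimitError
            results[model] = f"failed: {exc}"
    return results

# With the 0.26.x SDK, make_call could be:
# import openai
# make_call = lambda model: openai.Completion.create(
#     model=model, prompt="ping", max_tokens=1
# )
# print(probe(["text-davinci-003", "text-curie-001"], make_call))
```

Injecting the call as a function keeps the probe loop separate from the SDK, so you can swap in curl or another client for comparison.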

2 Likes

Thanks @belkin, I forgot to mention the model. Yes, it is text-davinci-003 for me too.
I wonder about the extent of the problem; I've been sitting on this error all day. Should I expect some information on it, or just wait it out?

1 Like

I also got a number of rate limit errors on the paid plan, and we have been using it for a while, so according to the documentation, we should be within the usage limits.

The first instance triggered after the third request in one minute (there were two additional requests before that, but they came 20 seconds earlier). It happened again after three requests in one minute, and once more after another three requests in one minute.

Happy to provide trace logs and tokens to debug on OpenAI’s end.

My first rate-limited error happened at 2023-02-06 03:45:44.779 UTC-05:00, my second at 2023-02-06 03:47:41.354 UTC-05:00 (with only six requests between the two). This was with text-davinci-003, and the prompts were about 30-40 tokens.

1 Like

Thanks @miguel.de.icaza . I’m on the paid plan as well.

This has been happening consistently for me all day. The error is a server error; I don't think it really has anything to do with the actual rate limit. At least that's how I read it. What do you think?

3 Likes

Same here: 429s when way, way under the rate limit. Also saw some toward the end of January. All with text-davinci-003. Created a support ticket, but no response so far.

3 Likes

Here we go again: The server had an error while processing your request. Sorry about that! (with text-davinci-003)

Same problem now with davinci-003

text-davinci-003 is still having this issue, with no resolution.

RateLimitError: The server had an error while processing your request. Sorry about that!
Still not working.

I started getting this error on my free trial. I switched to the paid plan and the issue persists… Normally when something like this happens, I just add a delay (1-2 seconds) before every call, but this time it is just not working on my end.
I'm using text-davinci-003.
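For anyone adding fixed delays by hand, exponential backoff with jitter usually behaves better than a constant timer. A minimal sketch, assuming the 0.26.x Python SDK; the `call_with_backoff` helper name and the example prompt are mine:

```python
import random
import time

def call_with_backoff(fn, max_retries=5, base_delay=1.0, retryable=(Exception,)):
    """Call fn(), retrying with exponential backoff plus jitter on retryable errors."""
    for attempt in range(max_retries):
        try:
            return fn()
        except retryable:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            # delays of roughly 1s, 2s, 4s, ... plus a little jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# With the 0.26.x SDK this would look like:
# import openai
# completion = call_with_backoff(
#     lambda: openai.Completion.create(model="text-davinci-003", prompt="Hello"),
#     retryable=(openai.error.RateLimitError,),
# )
```

This won't fix the current server-side failure (the same request can fail every retry), but it keeps transient 429s from killing a run.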

I encounter the same error too. I am using the pay-as-you-go billing option for OpenAI, and I persistently hit openai.error.RateLimitError while experimenting with model="text-davinci-003".

I tried creating a new API key, and it worked the first time. But on the second request the openai.error.RateLimitError occurred again. This had not been the case earlier.

Same error for us: a 429 with the text "The server had an error while processing your request. Sorry about that!" No response from OpenAI on their Messenger yet.

Super annoying as we’re just about to launch our product and now can’t! Terrible timing!

I'm encountering the same problem.

Also with text-davinci-003.

openai.error.RateLimitError: The server had an error while processing your request. Sorry about that!

Are people also having this problem with other bots?

2 Likes

I'm also running into this error using text-davinci-003.

2 Likes

Hey,

I just got a response in the Discord channel from a couple of OpenAI staff members.

They have not suggested anything yet, and I redirected them to this forum thread as well.

1 Like

Same here with text-davinci-003: error 429 Too Many Requests, even at one request per minute.

Same error here: openai.error.RateLimitError: The server had an error while processing your request. Sorry about that!

With the text-davinci-003 model…

I have recently been receiving the same error for text-davinci-003, so I did a few tests. Happy to provide trace logs if the OpenAI team needs or wants them.

According to the latest docs, the rate limits for a paid account after 48 hours are:

  • 3,000 requests / minute
  • 250,000 davinci tokens / minute (and proportionally more for smaller models)

However, I am able to trigger a 429 error with close to 98% reliability by sending 3 or more concurrent requests to the API. Regardless of the size of the requests, as soon as 3 (sometimes 4) are being processed in parallel using the same API key, the service locks up. So far I've tried it with the Python SDK, Java, and manually using curl.

I'll continue testing with non-parallel requests.
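If anyone wants to reproduce this kind of test, here is a rough sketch that fires N concurrent requests and tallies outcomes. The `count_outcomes` helper is mine, and the commented-out call assumes the 0.26.x openai SDK:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def count_outcomes(make_request, n_parallel=3):
    """Submit n_parallel calls at once and tally (successes, failures)."""
    ok, failed = 0, 0
    with ThreadPoolExecutor(max_workers=n_parallel) as pool:
        futures = [pool.submit(make_request) for _ in range(n_parallel)]
        for future in as_completed(futures):
            try:
                future.result()
                ok += 1
            except Exception:  # e.g. openai.error.RateLimitError / a raw 429
                failed += 1
    return ok, failed

# With the 0.26.x SDK, each request might be:
# import openai
# ok, failed = count_outcomes(
#     lambda: openai.Completion.create(
#         model="text-davinci-003", prompt="ping", max_tokens=5
#     ),
#     n_parallel=3,
# )
```

Varying `n_parallel` should show whether the lockup really starts at 3-4 in-flight requests, independent of request size.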

1 Like

I have been getting the same error. Switching to text-davinci-002 makes it work normally.
I think the issue is with the text-davinci-003 model only.