For the past few hours the API has been producing a generic rate limit error. I can't find another explanation, so I have to assume the service is down or intermittent:
openai.error.RateLimitError: The server had an error while processing your request. Sorry about that!
My traffic is very low, just a few requests and everything worked last week.
Using Python 3.10 and the Python openai client 0.26.4. I'm located in EEST (GMT+2).
Thanks @belkin, I forgot to mention the model. Yes it is text-davinci-003 for me too.
I wonder about the extent of the problem; I've been sitting on this error all day. Should I expect some information on it, or just wait it out?
I also got a number of rate limit errors on the paid plan, and we have been using it for a while, so according to the documentation, we should be within the usage limits.
The first instance was triggered by the third request within one minute (there were two more requests before that, but about 20 seconds earlier). It triggered again after three requests in one minute, and once more with three requests in one minute.
Happy to provide trace logs and tokens to debug on OpenAI’s end.
My first rate-limit error happened at 2023-02-06 03:45:44.779 UTC-05:00, my second at 2023-02-06 03:47:41.354 UTC-05:00 (with only six requests between the two). This was using text-davinci-003, and the prompt was about 30-40 tokens.
This has been happening for me consistently all day. The error is a server error; I don't think it really has anything to do with the actual rate limit. At least that's how I read it. What do you think?
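If it helps anyone triage: the message body does distinguish the two cases. A genuine 429 mentions your rate limit, while this one carries the server-error text. A small sketch for separating them in logs (the helper name is hypothetical, not part of the openai library):

```python
# Sketch: classify a 429's message so logs separate true rate limits
# from the "server had an error" variant. Helper name is hypothetical.
SERVER_ERROR_TEXT = "The server had an error while processing your request"

def is_disguised_server_error(message: str) -> bool:
    """True if a 429's message is the server-error text, not a real rate limit."""
    return SERVER_ERROR_TEXT in message

print(is_disguised_server_error(
    "The server had an error while processing your request. Sorry about that!"
))  # True
print(is_disguised_server_error("Rate limit reached for requests"))  # False
```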
Same here: 429s while way, way under the rate limit. Also saw some toward the end of January. All with text-davinci-003. Created a support ticket, but no response so far.
I started getting this error on my free trial. I switched to the paid plan and the issue persists… Normally when something like this happens, I just add a delay (1-2 seconds) before every call, but this time that is just not working on my end.
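For what it's worth, a fixed sleep didn't help me either; exponential backoff with jitter at least rides out the intermittent failures. A sketch, where `call` stands in for whatever wrapper you have around `openai.Completion.create` (the function name and parameters here are my own, not from the SDK):

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Retry `call` on failure, doubling the delay each attempt plus jitter.

    `call` stands in for your OpenAI request; any exception triggers a retry,
    and the last attempt's exception is re-raised.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay + random.uniform(0, delay))
```

In my experience this only helps when the outage is intermittent; during the worst stretches today, every retry failed.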
I'm using text-davinci-003.
I encounter the same error too. I am using the pay-as-you-go billing option for OpenAI, and I persistently hit openai.error.RateLimitError while experimenting with model="text-davinci-003".
I tried creating a new API key and it worked the first time, but on the second request the openai.error.RateLimitError occurred again. That had not been the case earlier.
Same error for us: a 429 with the text "The server had an error while processing your request. Sorry about that!" No response from OpenAI on their Messenger yet.
Super annoying as we’re just about to launch our product and now can’t! Terrible timing!
I have recently been receiving the same error for text-davinci-003, so I ran a few tests. Happy to provide trace logs if the OAI team needs or wants them.
According to the latest docs, the rate limits for a paid account after 48 hours are:
3,000 requests / minute
250,000 davinci tokens / minute (and proportionally more for smaller models)
However, I can trigger a 429 error with close to 98% reliability by sending 3 or more concurrent requests to the API. Regardless of the size of the requests, as soon as 3 (sometimes 4) are being processed in parallel with the same API key, the service locks up. So far I've reproduced it with the Python SDK, Java, and manually with curl.
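If the trigger really is ~3 concurrent requests per key, capping in-flight requests client-side may be a workable stopgap. A threading sketch, where the limit of 2 is just a guess based on the observation above and `do_request` is a stand-in for the actual API call:

```python
import threading

MAX_IN_FLIGHT = 2  # guess: stay under the ~3-concurrent trigger observed above
_slots = threading.Semaphore(MAX_IN_FLIGHT)

def limited(do_request, *args, **kwargs):
    """Run `do_request` with at most MAX_IN_FLIGHT calls in flight at once.

    Each thread blocks on the semaphore until a slot frees up, so no more
    than MAX_IN_FLIGHT requests ever run in parallel under the same key.
    """
    with _slots:
        return do_request(*args, **kwargs)
```

Same idea works with an `asyncio.Semaphore` if you're on the async client.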
I have been getting the same error. Switching to text-davinci-002 makes it work normally.
I think there is some issue with text-davinci-003 model only.
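Until it's resolved, a model fallback along those lines is an option, assuming text-davinci-002 output is acceptable for your use case. A sketch, where `complete` stands in for your wrapper around the completions call and the local `RateLimitError` class stands in for `openai.error.RateLimitError` (the helper names are mine):

```python
class RateLimitError(Exception):
    """Stand-in for openai.error.RateLimitError."""

def complete_with_fallback(complete, models=("text-davinci-003", "text-davinci-002")):
    """Try each model in order, falling back when one is rate limited.

    Re-raises the last RateLimitError if every model in the list fails.
    """
    last_error = None
    for model in models:
        try:
            return complete(model)
        except RateLimitError as exc:
            last_error = exc
    raise last_error
```

Just keep an eye out: once text-davinci-003 recovers you'll probably want it back as the primary.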