I'm hitting the same error. I'm on the pay-as-you-go billing option for OpenAI and persistently get openai.error.RateLimitError while experimenting with model="text-davinci-003".
I tried creating a new API key and it worked the first time, but on the second request the openai.error.RateLimitError occurred again. This had not been the case earlier.
Same error for us, 429 with the text “The server had an error while processing your request. Sorry about that!” - no response from OpenAI on their Messenger yet
Super annoying as we’re just about to launch our product and now can’t! Terrible timing!
I have recently been receiving the same error for text-davinci-003, so I ran a few tests. Happy to provide trace logs if the OpenAI team needs/wants them.
According to the latest docs, the rate limits for a paid account after 48 hours are:
3,000 requests / minute
250,000 davinci tokens / minute (and proportionally more for smaller models)
However, I am able to trigger a 429 error with close to 98% reliability by sending 3 or more concurrent requests to the API. Regardless of the size of the requests, as soon as 3 (sometimes 4) are being processed in parallel with the same API key, the service locks up. So far I've reproduced it with the Python SDK, Java, and manually with curl.
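Until this is fixed server-side, one client-side workaround is to cap the number of in-flight requests with a semaphore. A minimal sketch below — `call_api` is a hypothetical stand-in for the real SDK call (it just sleeps to simulate latency), and the limit of 2 comes from the ~3-concurrent-request threshold observed above:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

MAX_IN_FLIGHT = 2  # stay under the ~3-request threshold seen above

gate = threading.Semaphore(MAX_IN_FLIGHT)
lock = threading.Lock()
in_flight = 0
peak = 0  # highest number of simultaneous calls observed

def call_api(prompt):
    """Hypothetical stand-in for openai.Completion.create."""
    global in_flight, peak
    with gate:                  # blocks once MAX_IN_FLIGHT calls are active
        with lock:
            in_flight += 1
            peak = max(peak, in_flight)
        time.sleep(0.05)        # simulated network latency
        with lock:
            in_flight -= 1
    return f"ok:{prompt}"

# Even with 8 worker threads, at most MAX_IN_FLIGHT calls run at once.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(call_api, range(10)))

print(peak)
```

Swapping the sleep for the real completion call keeps the rest of your code fully parallel while the semaphore serializes just the API traffic.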
I have been getting the same error. Switching to text-davinci-002 makes it work normally.
I think there is some issue with text-davinci-003 model only.
For me, I threw a time.sleep(3) in there so I make at most 20 requests per minute. The API limits are somewhere in the documentation; I think it's 3k calls per minute if you have a credit card linked.
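A fixed sleep works, but exponential backoff wastes less time when the API is healthy and backs off harder when it isn't. A minimal sketch — here `RuntimeError` stands in for openai.error.RateLimitError and `flaky_endpoint` is a made-up endpoint that fails twice before succeeding:

```python
import time

def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry fn on rate-limit failures, doubling the wait each time."""
    def wrapper(*args, **kwargs):
        delay = base_delay
        for attempt in range(max_retries):
            try:
                return fn(*args, **kwargs)
            except RuntimeError:  # stand-in for openai.error.RateLimitError
                if attempt == max_retries - 1:
                    raise         # out of retries, propagate the error
                time.sleep(delay)
                delay *= 2        # exponential backoff: 1s, 2s, 4s, ...
    return wrapper

# Demo: a fake endpoint that "429s" twice, then succeeds.
calls = {"n": 0}

def flaky_endpoint():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("429: rate limited")
    return "completion text"

result = with_backoff(flaky_endpoint, base_delay=0.01)()
print(result)  # succeeds on the third attempt
```

Wrapping the real completion call the same way means a transient 429 costs a short pause instead of a failed request.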