- Just one request at a time.
- After around 3 requests, I hit an unspecified, generic RateLimitError. I can bypass this if I put a delay of at least a minute between individual, unbatched requests (inconsistently; closer to 2 minutes is more reliable).
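The workaround above can be sketched as a simple pacing loop. This is only an illustration of the described behavior, not the poster's actual code; `send_request` and the ~2-minute delay are assumptions:

```python
import time

def paced_calls(send_request, prompts, delay_s=120):
    """Send one request at a time, sleeping between calls.

    send_request is a placeholder for whatever function issues the
    API call; delay_s mirrors the ~2-minute gap described above.
    """
    results = []
    for i, prompt in enumerate(prompts):
        if i:  # no sleep before the very first request
            time.sleep(delay_s)
        results.append(send_request(prompt))
    return results
```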
That's really strange.
First, I love your profile picture. Spirited Away, right? I don't usually watch animated movies, but man, it was so powerful & moving.
Have you looked at your usage in the account overview? Is it possible that these requests are being sent multiple times?
Here's a great OpenAI cookbook on managing RateLimitErrors. That said, there have been cases in the past where an unspecified RateLimitError was being thrown for a mishandled server error. Are you by any chance outside of North America?
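For reference, the core strategy in that cookbook is retrying with exponential backoff and jitter. A minimal sketch of the idea (the function names here are my own; in real code you would catch `openai.error.RateLimitError` rather than a bare `Exception`):

```python
import random
import time

def retry_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry fn on failure, doubling the wait (plus random jitter)
    after each attempt. Re-raises after max_retries attempts."""
    delay = base_delay
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:  # in practice: except openai.error.RateLimitError
            if attempt == max_retries - 1:
                raise
            time.sleep(delay + random.uniform(0, delay))
            delay *= 2
```

Note this only helps with transient 429s; if the server returns the error on every single request, backoff will just delay the failure.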
Hi, thank you. Kiki's Delivery Service is a very touching film.
-
Yes, the Usage page shows just 5 requests sent within the last five minutes, totalling 1215 tokens. Is there a way to view more fine-grained usage information, in case repeat requests are being sent by accident?
-
Thank you. I took a look through this beforehand; unfortunately, these strategies were unable to help in my case. (There is an update on this, which I will note at the end of my response.)
-
I am in North America.
Update: Even after waiting 1-2 minutes with no executions, attempting to send even a single request is met with a RateLimitError immediately. I generated a new API key and tried again, and after about 9 requests spaced 2 minutes apart, the mysterious RateLimitError returned. If this is a server issue, is there some way to notify OpenAI about it? Alternatively, are there other ways to get more detailed usage information?
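One way to rule out accidental duplicate requests on the client side is to log every outgoing call locally and compare the count against the dashboard. A hypothetical sketch (`log_calls` is not part of the OpenAI library, just a plain wrapper):

```python
import time
from functools import wraps

def log_calls(send_request, log):
    """Append a timestamp to log for every outgoing API call, so the
    local count can be compared against the usage dashboard."""
    @wraps(send_request)
    def wrapper(*args, **kwargs):
        log.append(time.time())
        return send_request(*args, **kwargs)
    return wrapper
```

If the local log shows 5 calls but the dashboard shows 10, something in between (a retry layer, a framework hook) is re-sending requests.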
I think there's a distinction between an app that genuinely hits a rate limit through excessive requests, and receiving a 429 because of an OpenAI server-side issue.
In the past, mods on Discord have acknowledged this as an issue on their side and resolved it. Earlier, over about 2 hours, I made approximately 5 requests/minute, and roughly 2 of them returned a 429.
Thank you. I'm still having the issue on my end (now even attempting a single request hits a RateLimitError, including when batching), but I was wondering if the Discord link was available (I sent a help message to the Support team but have had no response yet).
Interesting.
@proxima You're completely right. It's always a good idea to knock out any potential client-side issues first, though.
Although I am not running any requests right now, if this were a widespread issue the forums & Discord would usually be overwhelmed with posts about it.
Would you mind posting the request that you make?
Absolutely agree with validating client-side issues first. I think it's cleared up now - thanks for your help.
Can anyone ping the staff? I consistently get the error and have no idea why, despite adding a time.sleep(1) after every call. @anon10827405
raise self.handle_error_response(
openai.error.RateLimitError: The server had an error while processing your request. Sorry about that!
Since yesterday: openai.error.RateLimitError: The server had an error while processing your request. Sorry about that!
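Since the same exception type seems to cover both genuine 429s and the server-side failure quoted above, one rough way to tell them apart in your logs is to match on the message text. This is only a heuristic based on the messages seen in this thread, not a documented contract:

```python
def classify_rate_limit_error(message):
    """Heuristically label a RateLimitError by its message text.

    'The server had an error while processing your request' appears
    to indicate a server-side failure rather than a true rate limit.
    """
    if "server had an error" in message.lower():
        return "server-side error"
    return "genuine rate limit"
```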
I am nowhere near any rate limit.
Same issue here when using text-davinci-003. We are nowhere near the rate limit for our account, but we get the rate limit error almost every request.
What's your max_tokens allotment? I changed mine from 3k to 2k and it seems to be working (after previously erroring).
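Lowering max_tokens can plausibly matter because, as I understand it, the limiter budgets the prompt tokens plus the full max_tokens against the tokens-per-minute limit, whether or not the completion uses them all. A rough sketch with made-up numbers (the 40,000 TPM figure is purely illustrative, not anyone's actual limit):

```python
def requests_per_minute_budget(prompt_tokens, max_tokens, tpm_limit):
    """Rough count of requests/minute that fit under a TPM limit,
    assuming each request is budgeted at prompt_tokens + max_tokens."""
    return tpm_limit // (prompt_tokens + max_tokens)
```

With a 200-token prompt under an illustrative 40,000 TPM limit, max_tokens=3000 budgets 3200 tokens per request (12 requests/minute), while max_tokens=2000 budgets 2200 (18 requests/minute), which may explain why trimming the allotment helped.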
I haven't made any requests, and can confirm that via my dashboard, but I'm still getting a RateLimitError. So I guess the rate limit must be 0, lol. My app makes one request, then sleeps 60 seconds before the next, and has been sitting for almost 30 minutes getting the rate limit error. I've tried multiple different models, and it's the same thing. Any help is appreciated.
Welcome to the developer forum!
Have you added a billing method to your OpenAI API account? If not, that will be why you are seeing the rate limit error at 0 usage. You can add a billing method here
Thank you! I have not; I was planning on testing my app with at least one call before paying. I thought there were monthly free credits allotted to free users? It looks like mine expired July 1st, but I only signed up today?
You're welcome! The Free token grant starts from when you open your OpenAI account, not from the date you register a payment method.
There's no monthly free credit. There's supposedly a one-time $5 free credit when you sign up, but many people miss out on it. I believe you get the credits the first time you use your phone number with OpenAI, and that may include signing up for ChatGPT. Regardless, if they're expired you can try contacting support, but that may take a while; so if you want to get started with the API, you'll need to provide payment info.
Thank you both! I guess there lies the issue.
It is close to unusable much of the time. I get a RateLimitError on most of the requests I send, even for a simple single-sentence response. It is not production ready.
Summary created by AI.
Users in the discussion titled "RateLimitError: The server had an error with no reason given" are experiencing a rate limit error while using the OpenAI API, particularly with the text-davinci-003 model.
Starting with petroslamb, users report a generic error that cannot be explained by their low traffic, with requests well under the rate limit. belkin also faces the issue with text-davinci-003, but not with text-curie-001. miguel.de.icaza furthermore reports recurring errors on the paid plan, even when usage limits are obeyed according to OpenAI's documentation.
Other users like proxima, ahmdayyy, nayakpplaban, M.H1, dpc and arsaltanveer72 continue to report encountering the same error while using text-davinci-003.
info27 did a few tests on the error and discovered that it occurs with 98% reliability by sending 3 or more concurrent requests to the API. This happens regardless of the request size and seems to lock up the service.
Several users show willingness to aid OpenAI in debugging the issue by providing trace logs and tokens, and they are awaiting further instructions or fixes.
Summarized with AI on Dec 2 2023
AI used: gpt-4-32k