Error: status code 429, Too Many Requests

I am on the Pay-as-you-go plan.
I double-checked that my limits are not exceeded.
The system I implemented summarises a text and then extracts keywords from the response.
I am using the model text-davinci-003.
Maybe this model is overloaded right now, but for the last 2 days it was working fine.
Now it's giving me this error. How do I solve this?


I’m having the same issue…429 Too Many Requests… account limits not exceeded.


I have the same issue, and others are reporting it here, too: Continuous gpt3 api 500 error: The server had an error while processing your request. Sorry about that! - #14 by rankoneads

This is the error response message which indicates it’s a server-side issue:

   "message": "The server had an error while processing your request. Sorry about that!",
   "type": "server_error"

I reached out to support an hour ago but no response so far.


@tomashby @suzan
Hoping to get some feedback from the OpenAI team.
I am having second thoughts about my product, which depends entirely on OpenAI, given the continuous downtime. I don't know what else I am going to do.


Likewise. I’m way, way off the rate limit, and yet still experiencing problems.

I occasionally get 500s, too, but recently 429s have been way more problematic, along with incredibly slow response times (some over 60s).


I have the same issue, sporadically getting 429s for a few days now. It seems to be getting more frequent, too. My usage limit is fine, and I didn't change the length or frequency of my requests.

Happened during a demo for a VC … not ideal. Hope it gets fixed soon.


Same! Are you still getting the error?

I read in the Discord group that they are now looking into it. They also sent out an email to those who subscribe to the status page.

It's taken a while, but I think they are onto it now.


Nice, finally. Hopefully, it won’t keep being like this.

Yep, was acknowledged, and since marked as resolved. Fingers crossed!


Still not working for me. Nice.

All network based services can have delays, errors, congestion, degraded performance etc.

When writing an application using an API, you should always trap errors by parsing the response from the API and taking action based on any error message.

In the case of these “server busy-ish” types of error messages, you can trap the error, back off a randomly generated number of seconds, and retransmit your request.

You can then show a default custom error message after some configurable number of retry attempts, amount of time, or both.

You can also automatically send an email to OpenAI support when this happens, with details so they will know you are having issues :+1:
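The trap-then-back-off loop described above could be sketched roughly like this in Python. Everything here is illustrative: `request_fn` is a placeholder for whatever actually makes your API call, and `RetryableError` stands in for however your client surfaces a 429 or 500 response.

```python
import random
import time


class RetryableError(Exception):
    """Stand-in for a 429 / 500 'server busy' response from the API."""


def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Call request_fn, retrying on retryable errors with exponential
    backoff plus random jitter. Raises after max_retries attempts, at
    which point you can show your own custom error message."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RetryableError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface a default error upstream
            # back off a randomly jittered number of seconds,
            # doubling the base wait on each failed attempt
            delay = base_delay * (2 ** attempt + random.random())
            time.sleep(delay)
```

For example, `call_with_backoff(lambda: summarise(text))` would retry a hypothetical `summarise` call up to five times before giving up.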


Regarding trapping the error, Ruby, I am doing this now.

I call the API… and if I get a 500 or 429… call it again in a few seconds…

That being said… it slows down the user experience to have to wait through multiple GPT API fails. Sometimes I get it failing three or four times in a row. That’s a long time for the users to sit there and twiddle their thumbs.

I am worried that the team at OpenAI is not doing proper testing if maybe one fifth of all of their API calls are allowed to fail with 429 or 500.

I wouldn’t want to inundate support with automatic emails… but if that’s your suggestion I am seriously considering it. The problem is my user base is fairly large, so you guys would get flooded :\


I hate to see

“This incident has been resolved”

when clearly it is not yet resolved.


Hello. I am having the same issue. Is the API working for anyone?


Having the same issue since about 3PM EST, right around the time today’s outage started. The API status page shows all systems green, yet the 429 errors continue.