429 error on gpt-3.5-turbo model with paid account

We are querying the gpt-3.5-turbo model via the OpenAI Node.js API, making calls one at a time, continuously. We have not reached any of the limits mentioned at https://help.openai.com/en/articles/7039783-chatgpt-api-faq, yet we still get 429 errors intermittently.

  1. Out of 1,300 requests made, around 130 returned a 429 error. Our average prompt is about 350 tokens.
  2. Responses are also taking a long time. What should the average response time be for prompts of that size?
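In case it helps, the usual workaround for intermittent 429s is to retry with exponential backoff. A minimal sketch, assuming the openai v4 Node.js client and an OPENAI_API_KEY in the environment (the retry count and delays are arbitrary values, not anything official):

```js
// Minimal retry-with-backoff sketch (not production code).
import OpenAI from "openai";

const client = new OpenAI();

async function chatWithRetry(messages, maxRetries = 5) {
  let delayMs = 1000; // start at 1s and double after each 429
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await client.chat.completions.create({
        model: "gpt-3.5-turbo",
        messages,
      });
    } catch (err) {
      // Only retry rate-limit (429) responses; rethrow everything else,
      // or rethrow once the retry budget is used up.
      const isRateLimit = err instanceof OpenAI.APIError && err.status === 429;
      if (!isRateLimit || attempt === maxRetries) throw err;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      delayMs *= 2;
    }
  }
}

const res = await chatWithRetry([{ role: "user", content: "hello" }]);
console.log(res.choices[0].message.content);
```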
1 Like

We have also been receiving very high levels of 429 errors for the last couple of hours.

  • As a side note, responses are also very slow when we do get them: 20+ seconds to reply to a single prompt of "hello".
  • Around 20% of requests are returning a server_error: "That model is currently overloaded with other requests. You can retry your request, or contact us through our help center at help.openai.com if the error persists. (Please include the request ID ****** in your message.)"

Happy to provide IDs if the OpenAI team needs them.
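For anyone else gathering evidence, here is a rough sketch of one way to log the latency and the x-request-id header on each call, using plain fetch (Node 18+). The endpoint and header name follow the OpenAI API docs, but double-check them before relying on this:

```js
// Logs HTTP status, request ID, and elapsed time for one chat completion call.
const start = Date.now();

const resp = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "hello" }],
  }),
});

console.log({
  status: resp.status,                         // 200, 429, 5xx, ...
  requestId: resp.headers.get("x-request-id"), // include this when reporting issues
  elapsedMs: Date.now() - start,
});

const body = await resp.json();
```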

2 Likes

Having this same experience today.

2 Likes

Yes, this seems to be an ongoing issue. We see it every few days, on and off, with most of the models.

2 Likes

Indeed, the API is so slow that it’s now practically unusable. I constantly encounter this error message:

That model is currently overloaded with other requests. You can retry your request, or contact us through our help center at help.openai.com if the error persists.

I’m facing the same issue. The thing is, we are not sending that many requests :neutral_face: we are still only in the development stage. :face_with_raised_eyebrow:
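One way to confirm you really are nowhere near your limits is to read the rate-limit headers the API returns with each response. A quick sketch with plain fetch; the header names follow OpenAI's rate-limit documentation at the time of writing, so verify them against the current docs:

```js
// Sketch: print the rate-limit headers from a single response to see how much
// headroom is left on requests and tokens.
const resp = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "ping" }],
  }),
});

for (const name of [
  "x-ratelimit-limit-requests",
  "x-ratelimit-remaining-requests",
  "x-ratelimit-limit-tokens",
  "x-ratelimit-remaining-tokens",
  "x-ratelimit-reset-requests",
]) {
  console.log(name, resp.headers.get(name));
}
```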