We are querying the gpt-3.5-turbo model via the OpenAI Node.js API, making calls one at a time, continuously. We haven't reached any of the limits listed at https://help.openai.com/en/articles/7039783-chatgpt-api-faq, but we are still getting 429 errors intermittently.
- Out of 1,300 requests made, about 130 (10%) came back with a 429 error. Our average prompt is around 350 tokens.
- Responses are also taking a long time. What should the average response time be for a prompt of around 350 tokens?
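Since many of these 429s appear to be transient server-side throttling rather than exceeded quotas, a common mitigation is to retry with exponential backoff. Below is a minimal sketch; `withBackoff` is a hypothetical helper name, and it assumes your API call throws an error carrying an HTTP `status` property, as the OpenAI Node.js client's errors do:

```javascript
// Retry an async request with exponential backoff on 429 errors.
// `fn` is any async function that throws an error with a `status`
// property on HTTP failures (as the OpenAI Node.js client does).
async function withBackoff(fn, { retries = 5, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Only retry rate-limit/overload responses, and only up to `retries` times.
      if (err.status !== 429 || attempt >= retries) throw err;
      // Exponential delay with a little jitter: ~500ms, ~1s, ~2s, ...
      const delay = baseDelayMs * 2 ** attempt * (1 + Math.random() * 0.25);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

You would then wrap each chat-completion call, e.g. `withBackoff(() => openai.createChatCompletion({ ... }))` (the exact method name depends on your SDK version). This won't fix slow responses, but it absorbs transient 429s instead of failing the request outright.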
We have also been seeing a very high rate of 429 errors for the past couple of hours.
- As a side note, responses are also very slow when we do get them: over 20 seconds to reply to a single prompt.
- Around 20% of requests are returning:
That model is currently overloaded with other requests. You can retry your request, or contact us through our help center at help.openai.com if the error persists. (Please include the request ID ****** in your message.)
Happy to provide request IDs if the OpenAI team needs them.
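For anyone wanting to collect those request IDs for a support ticket, here is a small sketch. The exact error shape depends on your SDK version (for the axios-based v3 client, response headers live on `err.response.headers`), so both lookups below are assumptions to adapt to your setup:

```javascript
// Pull the request ID out of a failed API call so it can be shared with support.
// Assumes the error exposes response headers; the exact location depends on the
// SDK version (e.g. `err.response.headers` for the axios-based v3 client).
function extractRequestId(err) {
  const headers = (err.response && err.response.headers) || err.headers || {};
  return headers["x-request-id"] || null;
}

// Accumulate IDs from failures for later reporting.
const failedRequestIds = [];
function recordFailure(err) {
  const id = extractRequestId(err);
  if (id) failedRequestIds.push(id);
}
```

Calling `recordFailure(err)` in each `catch` block leaves you with a list of IDs to paste into a help-center message.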
Having this same experience today.
Yes this seems to be an ongoing issue. We see it every few days on and off with most of the models.
Indeed, the API is so slow that it’s now practically unusable. I constantly encounter this error message:
That model is currently overloaded with other requests. You can retry your request, or contact us through our help center at help.openai.com if the error persists.
I’m facing the same issue. The thing is, we are not sending many requests at all; we are still in the development stage.