API Very Slow Since 2023-01-05

Since the beginning of this month, there have been many service disruptions and capacity problems, and API response times have nearly tripled. I’m getting response times of over 30,000 ms, whereas I used to get under 10,000 ms.

Does the OpenAI team have an ETA? Many of us suspect these problems are caused by ChatGPT. I firmly believe that API users should have priority when compute resources are allocated.

I’d love to hear from the team.



This affects text-davinci-002 requests the most. I preferred using that model because it was faster than text-davinci-003.


Same experience here. I’m using text-davinci-003 and responses are quite slow, which might hurt the user experience of my application.


Same with our app, and since we expose our API through the AWS API Gateway (which caps requests at 30 seconds), we constantly get timeouts.
Is there an ETA for resolving this issue?
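Until latency improves, one client-side workaround for the gateway cap described above is to set a per-request timeout shorter than 30 seconds and retry with backoff, so a slow call fails fast enough to leave room for another attempt. A minimal sketch (the attempt counts, timeout values, and the `call_with_retry` helper are illustrative assumptions, not anything OpenAI ships):

```python
import time

def call_with_retry(fn, attempts=3, per_try_timeout=25.0, backoff=2.0):
    """Call fn(timeout=...) up to `attempts` times, backing off between tries.

    `per_try_timeout` stays under the API Gateway's 30 s cap so each try
    either succeeds or fails quickly enough to permit a retry.
    """
    delay = 1.0
    last_exc = None
    for attempt in range(attempts):
        try:
            return fn(timeout=per_try_timeout)
        except TimeoutError as exc:  # for real HTTP calls, catch requests.Timeout instead
            last_exc = exc
            if attempt < attempts - 1:
                time.sleep(delay)
                delay *= backoff
    raise last_exc
```

In practice `fn` would wrap whatever HTTP call your integration already makes (e.g. a `requests.post(..., timeout=timeout)` against the completions endpoint), passing the timeout through.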

Yep, same here. I would guess they are considering infrastructure upgrades to deal with the boom they have seen in the past couple of months, with APIs and applications making OpenAI a core part of their products. I would bet we see improvements after Christmas: they could use the slow period, and with the majority of enterprises going into their change freezes, it would be the perfect opportunity to do this.

Nine months later, on a topic that started before chat models existed, is not “same here”.