API Very Slow Since 2023-01-05

Since the beginning of this month, there have been many service disruptions and capacity problems, and API response times have nearly tripled. I’m getting response times of over 30,000 ms, whereas I used to get under 10,000 ms.

Does the OpenAI team have an ETA? Many of us feel that these problems are due to ChatGPT. I firmly believe that API users should have priority access to compute resources.

I’d love to hear from the team.



This affects text-davinci-002 requests the most. I preferred using that model because it was faster than text-davinci-003.


Same experience here. I’m using text-davinci-003 and responses are quite slow, which might affect the user experience in my application.


Same with our app. Since we expose our API through the AWS API Gateway (capped at 30 seconds), we constantly get timeouts.
Is there an ETA to resolve this issue?
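Until latency improves upstream, one client-side workaround for the timeouts described above is to retry failed requests with exponential backoff. A minimal sketch, assuming a generic `call_api` callable standing in for your actual OpenAI request (names and parameters here are illustrative, not from any SDK):

```python
import random
import time

def retry_with_backoff(call_api, max_retries=4, base_delay=1.0, max_delay=30.0):
    """Call call_api(); on failure, sleep base_delay * 2**attempt
    (plus a little jitter, capped at max_delay) and retry."""
    for attempt in range(max_retries + 1):
        try:
            return call_api()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries; surface the last error
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay + random.uniform(0, 0.1))

# Demo: a flaky stand-in that "times out" twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("simulated timeout")
    return "ok"

result = retry_with_backoff(flaky, base_delay=0.01)
```

This won’t help when every single request exceeds the 30-second gateway cap, but it does smooth over intermittent disruptions; for consistently long generations, moving the OpenAI call behind an asynchronous job (polling or webhook) avoids the gateway timeout entirely.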