Since the beginning of this month, there have been many service disruptions and capacity problems, and API response times have nearly tripled. I'm now getting response times of over 30000ms, whereas I used to get under 10000ms.
Does the OpenAI team have an ETA? Many of us suspect these problems are caused by ChatGPT load. I firmly believe that API users should have priority when compute resources are allocated.
Same with our app, and since we expose our API through the AWS API Gateway (which caps requests at 30 seconds), we constantly get timeouts …
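Not a fix for the underlying slowness, but as a stopgap while we wait, we set a client-side timeout shorter than the gateway cap and retry, so a slow upstream call fails fast instead of the gateway returning a 504. A minimal sketch, assuming a Python caller; `fn` here is a hypothetical placeholder for whatever function actually makes the upstream request:

```python
import time

def call_with_retry(fn, attempts=3, timeout_s=25, backoff_s=2):
    """Call fn(timeout_s) up to `attempts` times, backing off between tries.

    `fn` is a placeholder for whatever makes the upstream API call and is
    expected to raise on timeout or failure. Keeping timeout_s below the
    API Gateway's ~30 s limit lets us retry instead of timing out at the
    gateway.
    """
    last_err = None
    for attempt in range(attempts):
        try:
            return fn(timeout_s)
        except Exception as err:  # in real code, catch the client's specific timeout error
            last_err = err
            time.sleep(backoff_s * (attempt + 1))  # linear backoff between retries
    raise last_err
```

Obviously retries multiply load when the service is already degraded, so keep `attempts` small.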
Is there an ETA to resolve this issue?