OpenAI: Why Are the API Calls So Slow? When Will It Be Fixed?

I have noticed outages across both an enterprise environment and an individual environment, and I don't think the problem is what everyone currently assumes. Latency varies by time of day: I'm in Australia, so it works well during our daytime, but at night, when the US market comes online, I can see response times of up to 4-5 minutes.

In the enterprise environment, the regular ChatGPT model has been playing up without any incident appearing on the status page, but with a test pool of 5,000+ users it's easy to identify. The error we see is something along the lines of "currently unavailable, please try again in a couple of minutes". The issue is intermittent and does not hit all users at once.
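For anyone hitting the same intermittent failures, wrapping each call in exponential backoff with jitter smooths most of them out. A minimal sketch, assuming the v1-style `openai` Python SDK and an `OPENAI_API_KEY` in the environment:

```python
import random
import time

from openai import OpenAI  # assumes the v1-style openai Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def chat_with_retry(messages, model="gpt-3.5-turbo", max_attempts=5):
    """Retry intermittent 'currently unavailable' style failures."""
    for attempt in range(max_attempts):
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except Exception as exc:  # in practice, narrow this to openai.APIError etc.
            if attempt == max_attempts - 1:
                raise
            # Back off 1s, 2s, 4s, ... plus jitter so retries don't stampede.
            delay = (2 ** attempt) + random.uniform(0, 1)
            print(f"Transient error ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)
```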

With the OpenAI models, yes, if you sacrifice quality and use the smaller models, you will see very short response times. The higher-end models behave as described above, taking 4-5 minutes. It has got gradually worse over the last couple of months, for both ChatGPT and API responses. This last week I received emails from both the Google Play Console and Apple, each stating they have seen a huge spike in API calls and applications using AI. They are also advising of stricter requirements, with a 30-day window to comply.
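If you want numbers rather than a gut feel for the quality-versus-speed trade-off, a rough timing harness makes the gap between tiers obvious. A sketch only; the model names here are illustrative, so substitute whatever tiers you actually have access to:

```python
import time

from openai import OpenAI

client = OpenAI()
prompt = [{"role": "user", "content": "Summarise the OSI model in one sentence."}]

# Time the same prompt against a cheap tier and a higher-end tier.
for model in ("gpt-3.5-turbo", "gpt-4"):
    start = time.monotonic()
    client.chat.completions.create(model=model, messages=prompt)
    print(f"{model}: {time.monotonic() - start:.1f}s")
```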

The reason I have given all of the above is that, taken together, it suggests OpenAI is struggling with the huge boom they are seeing, and I think in the new year we will see big improvements to latency, driven by the spike in new applications hitting the market and demanding quicker speeds. I have found that fine-tuning your own model on top of GPT-3.5 can help with response times, but we are still subject to the rapidly growing demand on the service. I am also making direct POST requests, bypassing intermediate servers (see the sketch below).
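For reference, by "direct POST" I mean hitting the chat completions endpoint straight over HTTPS instead of routing through an SDK or your own backend, which removes one hop from the path. A minimal sketch using the `requests` library, again assuming `OPENAI_API_KEY` is set in the environment:

```python
import os

import requests

# Direct POST to the chat completions endpoint, no intermediate backend.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=300,  # generous, given the multi-minute tails described above
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```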