GPT-4 and GPT-4o extremely slow

After the release of GPT-4o in the chat webapp, I noticed that it was really fast at the beginning. But now it is painfully slow; even GPT-4 is so slow right now that it can't finish a response.

I had to refresh the page many times to get the full response. I'm guessing it's some issue with the streaming functionality.

Has anyone else faced the same issue?

21 Likes

I have the same issue. When a conversation gets a bit long, GPT-4o starts to crash and works extremely slowly.

7 Likes

I am having the same issues you described.

6 Likes

I am seeing the same thing on my end.

6 Likes

Same – it’s painful! Do you think starting a fresh conversation to clear out the old history would help? It’s pretty much unusable for me at this point.

5 Likes

How can we reach support? It’s not usable at all anymore…

4 Likes

Same here… I’m even struggling to get onto the chat.openai page at all. I’m sure their traffic has peaked with the new model, but it’s at the point where it’s unusable…

5 Likes

Yeah, it’s bad. So bad I found myself going to Claude today. :rofl:

5 Likes

Yeah, same issue; it almost feels like it’s throttling me for not being US-based!

3 Likes

Super slow for me, and it’s also giving really dumb answers. For instance, I’m writing some unit tests that aren’t passing, and the code fix GPT-4o gave was to remove the comments… huh???

2 Likes

I’ve been experiencing noticeable delays with ChatGPT since the launch of version 4o. After hitting ‘Enter,’ it can sometimes take upwards of five seconds just to send a prompt. While I’m not a technical expert, it feels as though the influx of new users might be stretching the system’s bandwidth to its limits.

As a Premium subscriber, I expect a level of service that justifies the cost. It’s frustrating to face the same slowdowns that one might expect from a free service. Moreover, I’ve noticed a decline in the quality of responses lately, which is disappointing. The combination of slower response times and decreased quality is making me seriously reconsider my subscription.

I believe that dedicated bandwidth for paying customers should be a standard part of the service. Not providing it not only diminishes the user experience but also feels unfair and dismissive toward those who have invested in the platform. A better balance needs to be struck to accommodate both free and premium users.

10 Likes

Having the exact same lag issues. It’s beyond frustrating, especially considering I pay for GPT Plus. Prompts are taking at least five times as long, and quite often I have to refresh the page. And yes, I’ve cleared my cache…

1 Like

I’ve been monitoring this topic for a few days as GPT has become slow; today it is unusable.

A relatively simple prompt took literally minutes to even begin a response, and after several attempts with very, very slow output, the response ‘crashes’ and fails midway through.

Because of this, I actually cannot use the system.

I’m a paid subscriber and expect better. This is the first time I’ve seen this level of service degradation from any provider I’ve ever paid for.

1 Like

Literally, it’s ruined. It’s not working like it did before; it’s literally worthless, and it’s not worth paying for a service that continuously malfunctions: conversations not being completed, errors, and all that kind of stuff I never experienced before. Now, even when you clearly tell it “don’t code” or “don’t do this” in the prompt, it literally just repeats the last answer it gave. It’s the most annoying AI in OpenAI history.

1 Like

Yes, when you send the prompt there is a freeze of a few seconds.

1 Like

Same here in the USA. 30 seconds to process a prompt with only 3000 tokens.

This is really bad!

UPDATE: I tried going back to ‘gpt-3.5-turbo-0125’ but it had the same response time. This is either throttling as George hypothesized, or OpenAI’s servers need to be scaled up. Either way, it is not workable for a production app, and if I have to recode for Claude or Gemini, I probably won’t go back.
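For anyone trying to pin down whether the delay is the model or the transport, here is a rough sketch of how I’ve been measuring time-to-first-token vs. total time. It assumes the current openai Python SDK (v1.x) and an API key in the environment; the model name and prompt are just placeholders, not a claim about what anyone above is running:

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

start = time.perf_counter()
stream = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",  # placeholder; swap in the model you're testing
    messages=[{"role": "user", "content": "Say hello."}],
    stream=True,
)

first_token_at = None
for chunk in stream:
    # Record the moment the first content chunk arrives
    if first_token_at is None and chunk.choices and chunk.choices[0].delta.content:
        first_token_at = time.perf_counter()
end = time.perf_counter()

print(f"time to first token: {first_token_at - start:.2f}s")
print(f"total time:          {end - start:.2f}s")
```

A long time-to-first-token with normal streaming afterwards points at queueing/throttling on the server side, while slow chunk-by-chunk output suggests generation or network issues.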

2 Likes

Same here. Anyone have any ideas on how to fix it?

1 Like

Sadly, so far the only fix seems to be switching to Anthropic or Gemini 1.5 Pro, which is a really painful prospect when you have a lot of code depending on the current service (see the sketch below for one way to soften that).
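If it does come to switching, one way to limit the pain is to keep the provider behind a small interface so only one adapter has to change. This is a minimal sketch, assuming the openai Python SDK; the class and function names are hypothetical, not part of any library:

```python
from typing import Protocol
from openai import OpenAI


class ChatBackend(Protocol):
    """Anything that can turn a prompt into a completion string."""
    def complete(self, prompt: str) -> str: ...


class OpenAIBackend:
    """OpenAI-backed adapter; a Claude or Gemini adapter would expose the same method."""
    def __init__(self, model: str = "gpt-4o") -> None:
        self._client = OpenAI()
        self._model = model

    def complete(self, prompt: str) -> str:
        resp = self._client.chat.completions.create(
            model=self._model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content or ""


def summarize(backend: ChatBackend, text: str) -> str:
    # Application code depends only on the ChatBackend interface,
    # so swapping providers means writing one new adapter class.
    return backend.complete(f"Summarize in one sentence:\n{text}")
```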

1 Like

Same behavior here. I was using GPT-4 to edit VBA macros, and it would spit them out with the screen scrolling in real time so I could follow the code as it was generated. It was always stable, not necessarily fast, but not laggy either. Now every request is laggy, and after a while the website needs to be reloaded, sometimes repeatedly. I hope they can resolve these issues or provide a fix. I am experiencing this problem on Chrome; has anyone had better results with other browsers?

1 Like

Laggy on Chrome and on DuckDuckGo. My impression is that it might actually be the webpage itself causing trouble with the streaming. Sometimes, for example, it won’t scroll, which can’t be a problem with the model itself. Fingers crossed they sort it out…

1 Like