ChatGPT 4o painfully slow, is it just me?

Hi Folks, is it just me, or are ChatGPT 4 and 4o exceptionally slow? No one else seems to mention it, but both the Mac app and the web version are almost unusable for me. (Paid user)

Any ideas?

For me it may be even worse, and I am actively trying to use it for real work. I have had to abort a number of times, to the point of giving up, shutting down, and logging back in.
It is also behaving erratically. I asked it to write a prompt for an image to accompany an article I was developing with it, and then to use that prompt to generate an appropriate image. It produced an image that was almost unrelated, and afterward, instead of giving me a sensible prompt, it proceeded to reprint the article, so slowly that I wondered if it was broken.

I thought it was just me, but this is horribly slow. It's smart, but turtle-like in rendering the response.

Same here, and it has even occasionally errored out, which wasted the remaining 4o usage I had. It is really frustrating.

Hello, I am glad I was not the only one experiencing this. As a daily user, I can say that performance has degraded. I am a free user, since I use it just to bounce ideas around, but nowadays even that is hard. The default model runs smoothly for me, though; 4o takes much more time to generate a single response.

Same here, though it does not happen every time. I am currently getting around 1.5 to 2 tokens per second. What is going on? (ChatGPT app on Mac, model 4o)
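If you want to put a rough number on it, here is a minimal sketch that measures time to first token and an approximate words-per-second rate over the API, using the OpenAI Python SDK with streaming. Two caveats on the assumptions: it counts whitespace-separated words rather than real tokens, and the ChatGPT app runs on different infrastructure from the API, so this is only a point of comparison, not a measurement of the app itself.

```python
# Rough streaming speed check against the API.
# Assumes the openai v1.x SDK is installed and OPENAI_API_KEY is set.
import time
from openai import OpenAI

client = OpenAI()

start = time.time()
first_token_at = None
word_count = 0

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write three sentences about latency."}],
    stream=True,
)
for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta.content or ""
    if delta and first_token_at is None:
        first_token_at = time.time()  # latency before any visible text appears
    word_count += len(delta.split())  # crude word count, not exact tokens

elapsed = time.time() - start
if first_token_at is not None:
    print(f"time to first token: {first_token_at - start:.2f}s")
print(f"~{word_count / elapsed:.1f} words/sec over {elapsed:.1f}s total")
```

If the API streams at a normal rate while the app still crawls, that points at the client rather than the model.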

Same for those of us who paid. (Windows, Dell)

I just mentioned this in another post, but I think it’s important enough to bring up again here. Since the release of ChatGPT version 4o, there’s been a noticeable lag in response times—it sometimes takes over five seconds just to send a prompt after pressing ‘Enter.’ This delay is something I wouldn’t expect as a Premium subscriber, where quick and efficient service should be a given.

Beyond just the increased wait times, I’ve also observed a dip in the quality of responses. This combination of issues has me seriously considering whether to continue my subscription. It seems as though the system is overwhelmed by the volume of new users, and this is impacting not just speed but also the quality of interactions. It’s disappointing, particularly because one of the main reasons I opted for Premium was to avoid these kinds of issues.

I believe it’s reasonable to expect that part of our subscription fees should ensure a certain amount of dedicated bandwidth for paying customers. If these problems persist, it might just be the push I need to look for alternatives that can offer consistent quality and responsiveness.

Perhaps the issue lies with the Mac app rather than GPT-4o. Responses generate quickly on the web, but rendering in the Mac app can make generation feel slower than it is. When a conversation gets lengthy, the Mac app becomes noticeably less responsive and occupies an entire CPU core, so the OpenAI team may need to optimize it.
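Activity Monitor shows the app's CPU usage directly; as an alternative, here is a small sketch using the psutil package. It assumes psutil is installed and that the app's process name contains "ChatGPT", which may not match exactly on every install.

```python
# Sample the ChatGPT desktop app's CPU usage once per second for 30 seconds.
# Assumes psutil is installed and the process name contains "ChatGPT".
import time
import psutil

procs = [p for p in psutil.process_iter(["name"])
         if "chatgpt" in (p.info["name"] or "").lower()]
if not procs:
    raise SystemExit("No ChatGPT process found")

for p in procs:
    p.cpu_percent()  # first call only primes the counter

for _ in range(30):
    time.sleep(1)
    for p in procs:
        try:
            # 100% here means one full core busy, matching Activity Monitor
            print(f"{p.info['name']} (pid {p.pid}): {p.cpu_percent():.0f}% CPU")
        except psutil.NoSuchProcess:
            pass
```

A sustained reading near 100% while a long conversation is open would support the idea that rendering in the app, not the model, is the bottleneck.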