I’m a Plus user, and it’s frustrating when GPT is sometimes very slow. No, it has nothing to do with me, my connection, or anything else on my end. That can be completely ruled out, since it sometimes works normally, but more and more often it’s very, very slow.
You have to wait a minute before anything happens, even for the simplest questions with the simplest models, and on top of that the OpenAI status page usually shows that everything is fine, even though responses are very slow.
I’m also a little frustrated right now, because whenever I want to do something with GPT, sometimes something urgent, it feels like it just doesn’t work at that moment.
It used to be that a multi-page answer appeared within seconds. Now it takes 10-30 seconds before anything starts, and then I can read along as the text is being generated.
Instead of focusing on performance, more and more new things are being added, which slows it down even further. Yes, it’s great that image generation now produces much better results, but in practice it’s unusable. Who has time to wait minutes for an image when Flux and similar tools do it in seconds?
Yes, I know it’s overloaded at the moment, but not even the chat works well anymore; it’s just slow. As a paying customer, shouldn’t one be able to expect a little more here? But nobody cares… anyway, I just wanted to mention it.
Something that was impossible a year ago is now entirely possible, but for the moment you have to wait a few minutes for it. Is that really such a big deal?
I’ve now switched to Gemini, too. It’s much faster, and they also have a very good archiving system for chats (something the community here has been “begging” for for years, and to this day nothing has happened), which lets me file and sort them as I want. And let’s be honest, 1 million tokens…
I agree. I am a Plus user too, and it’s usually pretty quick, but for the last 5 days or so it’s been painfully slow and has even asked me to “retry” quite a bit. Then it spat out some weird, jumbled responses. It corrected itself when I retried, but this is not worth the monthly premium.
Same here, on April 30. ChatGPT 4o wanted to send me a template project for Metashape to help with a project (a .psx file of about 24 KB). In the end it could not send it directly, so it offered to send it via WeTransfer or to my server, but that isn’t allowed. Then it proposed sending a base64-encoded file to extract, but it couldn’t send the full code at once, so it finally offered to send the code split into 3 parts, to decode afterwards. Every time it was insanely slow to send each part, and every time the file was corrupted, and I tried every approach. In total I lost about 2-3 hours. In a way, it’s good to know that a human can sometimes be more helpful and faster. So I did the project myself instead, which was quicker! ^^ But yes indeed, I don’t see the point of being a Plus user given these issues… tbc
I am indeed a Plus user in Brazil, and I am facing serious issues with slow response times: it is absurdly slow at generating even simple, basic responses with the 4o model. It has become more flattering, always wanting to do more things, while the quality of the responses has decreased, including their length. Now, when I ask for a text modification, it responds with only part of it, and I have to ask again for the complete version. The slowness is incredible; I don’t know what happened to the performance, it has regressed.