GPT-4o has become unusable

Over the past few weeks, GPT-4o’s responses have become so thoroughly disconnected from the content of my input that it can’t function. With every iteration of a code base it removes details it shouldn’t, ignores requests, dismisses questions, or is otherwise one to two steps behind.

If I tell it a problem is solved, that same problem shows up in its next set of things to fix. If I ask it to fix something, it might fix that, but it removes or breaks something else entirely unrelated to and untouched by the component it’s “fixing.”

It constantly loses track of my projects. It clearly remembers all the details when I ask it, but it will not, under any circumstance or prompt, use all of those details when it writes a segment of code.

It is TOO eager to string text together. Yes-or-no questions get 500+ word essays, plus a full copy of whatever code you’ve been working on, complete with hallucinations, removal of critical components, and other unpleasantness, punctuated by notes describing what it was supposed to do, but the notes and the code don’t match.

I observe a brief moment of lucidity when starting a new conversation, but it is lost by the time we are back up to speed enough to get anything done.

ChatGPT is miserable to use right now, to the point of being counterproductive.


Welcome to the community!

GPT-4o is OpenAI’s weakest GPT-4-class model! It’s very conversational, but probably not your best option as your daily co-pilot.

By the way, did you know that OpenAI is going to finally memory-hole its most powerful (also most expensive) model to date next month? It hasn’t been available as a ChatGPT model for a long while and not many people still have access to it.

I suggest you use GPT-4 turbo (marked as GPT-4 in ChatGPT) while you still can!

Thank you for the reply. It was a nice thought, but GPT (blank - turbo is not selectable) is not functioning any better right now. My complaint and feedback stand. And to be clear, the OpenAI family of chatbots was usable until recently.


I’m finding the same when using it for literary purposes. It often tacks on large amounts of information from previous inputs instead of the actual input it is replying to.

I also frequently receive

“Something went wrong while generating the response. If this issue persists please contact us through our help center at”

and it has already completely bricked chats I was using.

That’s before you take into account the incredibly sluggish response speed, which often freezes the chat window, or the fact that it often fails to remember simple details, like using British English rather than American.

While GPT-4 had its issues and wasn’t perfect, GPT-4o is terrible.



A complete s**t show and waste of time and money.

It is unusable. Earlier it was not able to read the content of a JS file with 10 lines of code. 10 lines…not 10k lines. 10 lines!!
It read out half of it, repeatedly. I started a new chat: nothing. I changed and then removed my custom instructions: still nothing.

It has its moments, but usually it is more of an endless frustration.
Especially since it is usually not working in the evening hours: completely overloaded, despite my paying for the Plus subscription.

No matter what you try: translations, proofreading, coding, you name it. It’s completely lobotomized.

I will move to Claude. Even their free version is better, faster, and more reliable.


It’s unusable. @openai, please fix it. It’s really impossible to get any value from it; in fact, it is wasting more of my time than making me productive.


Same here. Since the release of GPT-4o in May, both GPT-4 and 4o have been getting worse every week. There seems to be a very precise plan for the downgrade. It is so bad that I have recently stopped using GPT altogether. How is it possible that GPT-4 is worse now than GPT-3.5 was six months ago? Before May, GPT-4 was my best programming companion, and I think every single dollar spent on it was an investment; now, even for free, GPT-4o is just a time waster. Back to Google Search and Stack Overflow.

@openai, if you want more money, just ask for it. I can pay twice as much for the best model, but the crap you’re offering is useless. Llama 3 or even Mistral 7B give me better results. Just go to Perplexity Labs and see for yourself.

I can’t believe the mainstream media hasn’t covered this huge downgrade yet. I think that’s because the plan is to remove programming skills while keeping the regular knowledge-based answers for the general Q&A that a mainstream audience might ask.


Agreed. I wasted many hours of work trying to work out an Excel sheet, only to figure out that “Something went wrong while generating the response. If this issue persists please contact us through our help center at” is a continuity breaker, which I experience every third prompt. When I finally asked it to explain the reasoning behind a reply, per my instructions, it was obvious it had no idea what was happening. The money wasted in this moment and on this trash is very frustrating. I paid for Teams expecting a working product, not an experiment. This barely works, if ever. Pay some respect to paying customers.


It may seem obvious, but I tested Claude 3.5 Sonnet today and it seems to be much better than GPT-4, not to mention GPT-4o. It’s still not at the level of the “old GPT-4,” but it is at least helpful in following instructions, and it doesn’t repeat its mistakes over and over again like GPT-4o. So if you don’t want to waste hours on GPT, or you’re planning to give up LLMs altogether, give it a try first :slight_smile:

This is probably not the best place to comment on it, but Open (Closed? :slight_smile: ) AI has gone from being the coolest tech company out there to a dystopian behemoth within a year, killing its own products with constant backstage drama and a crazy, narcissistic boss who’s out of touch with the real world. They try to convince us all that their product is a threat to humanity, because their stakeholders love the hype, while at the same time their flagship product is becoming the fastest-downgraded product I’ve ever seen, to the point where you can’t even use it.

I wish their stakeholders would read the forum, but I really doubt that’s what they care about.


I switched to Claude on Saturday, before my OpenAI subscription even ran out. I am not a fan of the limits, but I am a big fan of Artifacts, and of the fact that it actually responds to the inputs I send it.


During the time I paid for GPT-4, I got very impressive text-editing capabilities. All of this is completely gone with GPT-4o, which repeatedly ignores the execution of very simple commands. It’s not funny when GPT-4o omits 20% of a document’s content. Luckily I noticed it.
Right now, I am putting this version on hold.


I agree it is a mess. I don’t know why it repeats massive amounts of text when I ask for something specific. It also didn’t know how to cite Chicago-style footnotes, like when to use “ibid.”, etc.