Has anyone else noticed a decline in the quality of ChatGPT responses over recent updates? It seems like answers have become worse and less intelligent, sometimes missing the mark entirely, and, in some cases, even providing responses that don’t make sense or feel “dumbed down” compared to previous versions.
I found myself struggling to get meaningful answers and have had to switch to using Claude for a better experience. Is this a widespread issue? Are there particular areas where the quality has dropped the most (e.g., technical advice, language comprehension, complex analysis)? Would love to hear if others are experiencing the same and if anyone has found effective workarounds!
I’ve noticed the same with both simple coding and generic questions. It’s never been perfect, but now it feels much more like a Google search: it returns generic information rather than responding to the specifics of the request.
Its responses have degraded in quality at an alarming rate, if I’m honest.
I asked it a simple question, specifying that it should Google and confirm so I wouldn’t waste time, and it returned a generic answer, as if it were copy-pasting the first sentence of a Google result. When I added further requirements to home in on a good answer, it instantly broke the requirement and regurgitated the same answer. And I’m using 4o.
If I wanted to experience a joke I’d watch a comedy show, not visit gpt for garbage answers for a service I pay for.
Do better, OpenAI
I remember that in December 2023, ChatGPT was essential for my work. Back then, GPT Plus did not show the inferior quality of responses we’re seeing today. Unfortunately, things like this happen and are beyond our control. This topic is important for informing developers who are committed to improving the tool and the user experience: identifying errors, finding points that can be improved, and reviewing them.
I agree and get more and more frustrated with the answers from ChatGPT.
In the past it was a great help, but the answers are getting worse. Instructions seem to be only partly understood, and even after I correct the prompt, it still doesn’t deliver the right answers.
Yes it has! I noticed it around mid-November, and I’m not the only one who has perceived this. It retains almost no context now. I’m actually thinking about cancelling until I see better quality.
I completely agree with you. Starting this month, I’ve decided to cancel my ChatGPT subscription and focus on Claude instead. The quality has really dropped, and even their older models like GPT-4 (01) feel limited now.
Yesterday, I tested it and couldn’t even attach a long file; it just gave an error without any explanation. It’s honestly frustrating and disappointing. Hopefully, they’ll release a better version soon!
I also tested Claude. I was more than impressed with the quality of its responses and subscribed. I’ll wait a few months to see whether ChatGPT returns to its former quality; if it doesn’t, I will cancel my subscription.