Has anyone else noticed a decline in the quality of ChatGPT responses over recent updates? It seems like answers have become worse and dumber, sometimes missing the mark entirely, and, in some cases, even providing responses that don’t make sense or feel “dumbed down” compared to previous versions.
I found myself struggling to get meaningful answers and have had to switch to using Claude for a better experience. Is this a widespread issue? Are there particular areas where the quality has dropped the most (e.g., technical advice, language comprehension, complex analysis)? Would love to hear if others are experiencing the same and if anyone has found effective workarounds!
I’ve noticed the same with both simple coding and generic questions. It’s never been perfect, but now it feels much more like a Google search: it returns generic information rather than responding to the specifics of the request.
Its responses have degraded in quality at an alarming rate, if I’m honest.
I asked it a simple question, specifying that it should Google and confirm the answer so I wouldn’t be wasting time, and it returned a generic answer as if it had copy-pasted the first sentence of the top Google result. When I added further requirements to hone in on a good answer, it instantly broke them and regurgitated the same answer. And I’m using 4o.
If I wanted to experience a joke I’d watch a comedy show, not visit gpt for garbage answers for a service I pay for.
Do better, OpenAI
I remember that in December 2023 the experience was essential for my work. Back then, GPT Plus did not show the inferior quality of responses that it does today. Unfortunately, things like this happen and are beyond our control. This topic is important for informing the developers committed to improving the tool and the user experience: identifying errors and points that can be improved, then reviewing and fixing them.
I agree and get more and more frustrated with the answers from ChatGPT.
In the past it was a great help, but the answers are getting worse. Instructions seem to be only partly understood, and even after correcting the prompt, it doesn’t deliver the right answers.
Yes, it has! I noticed it around mid-November, and I’m not the only one who has. It retains almost no context. I’m actually thinking about cancelling until I see better quality.
I completely agree with you. Starting this month, I’ve decided to cancel my ChatGPT subscription and focus on Claude instead. The quality has really dropped, and even their older models like GPT-4 (01) feel so limited now.
Yesterday, I tested it and couldn’t even attach a long file—it just gave an error without any explanation. It’s honestly frustrating and disappointing. Hopefully, they’ll release a better version soon!
I also tested Claude and was more than impressed with the quality of its responses, so I subscribed. I’m waiting a few months on ChatGPT, and if it doesn’t return to its former quality, I will cancel my subscription.
Thank you for sharing your experience with Claude! It’s great to hear you’re impressed with its responses. Let’s see how ChatGPT evolves in the coming months.
Also, please don’t hesitate to share any AI tools that can make our coding work easier. It would be greatly appreciated!
I tried Claude. Honestly? It surprised me for my purposes. It has excellent value for money, and it hallucinates very little. That said, it is not immune to errors and problems, like any tool.
Try using both tools, Claude and GPT Plus. I have been doing this to complement my tasks and goals.
Claude even provides a guide on how to write prompts correctly to obtain more accurate responses.
“Declined” is an understatement; try “tanked.” The program used to produce predictive, natural speech patterns with extremely well-sourced material when asked.
Now it is simply giving robotic responses and extremely poor sources. If this continues, I’m going to get out of my contract with the company and choose a better AI. ChatGPT, especially 4o pro, was perfect for my company’s needs. Now it’s essentially useless. And if it gives me one more response from a wiki or Al Jazeera, I’m going to puke. My field uses think tanks and very objective sources. We never use wikis, Al Jazeera, MEMRI, or any other highly biased source material.
I thought I was going crazy.
I asked for better quality sources, updated its memory, uploaded the source material, and it kept giving me generic answers based on a simple Google search.
When deciding which AI tool to use, I think the model itself isn’t the only variable to consider; the compute time and output tokens matter too. Read this post on Quora: