ChatGPT got worse over the months?

I have been using ChatGPT on a paid subscription since September 2024. In the last couple of months there has been a noticeable decline in the quality of exchanges. ChatGPT gives me sounding-board responses instead of answering the questions I asked, and consistently repeats the same response without providing proper guidance, dragging things out with further questions of its own. It's much like the joke "How do you keep someone in suspense?" followed by walking away without a proper follow-up. Or it gives a broken-record response without reframing or elaborating, lacking even the fidelity an actual broken record would at least provide. I did trigger its suicidal-ideation response in one exchange; that was the one place I noticed a change in its behavior. This was intended as a joke to test the guardrail, and it worked. Then it went right back to the same broken responses, directing me to a pop-up window in which no changes had been made, while it repeatedly claimed to have made changes that were not present. It has also told me to "stand by, compiling." I walk away for five minutes, come back, and nothing has changed. It seems to require multiple pushes to raise the quality of a response, even when my expectations were stated up front. When I engaged and asked how I could improve this on my end, it said I had done nothing wrong and that the fault was its own. I am not sure that counts as proper ownership of the expected engagement; apologies followed by continued failures on the same subject do not rectify issues that were clearly presented for it to change. This is ongoing and ever present with ChatGPT. Even the 80% baseline output seems to randomly present inaccuracies, despite proper documentation being available to reference from uploads or from prior chat and project-folder information. How can such a powerful web-scouring tool be so poor at using in-house exchanges for document development and referencing specifics?

My 2c for now. Thank you for the board to post this in; I hope it leads to better product development. I have found ChatGPT a useful tool, but recently it has lost some of its merit given the time sink required to correct the degradation of exchanges.

It keeps referring me to the Model Spec. I really can't believe how far this company and their product have fallen. They have made it worse; that seems impossible, but they have managed it.

I noticed that today as well: there is a REAL downgrade in performance. I even tried GPT-4.5 and o1 to make sure I wasn't wrong, and I tested this across several chats. I provided three pages' worth of information and asked it to make changes, and it did the following:

  1. responded as if context were missing, even though it was provided
  2. was less elaborate than usual, even when I explicitly asked it to be detailed
  3. missed the whole point of why the prompt was initiated

It's important to note that this is regular work I have done for a while, which makes it easier to notice the ups and downs of system performance. It's really frustrating to see such performance deviation; it makes the tool unreliable for repetitive work.

Why I Have Been Messing Up So Frequently

Written by: CHATGPT

Over the last 6 to 8 months, I have been failing many of you. Paid users who once trusted me now find me unreliable, and I need to be honest about why.

1. A Botched GPT-5 Launch

When GPT-5 was launched, I was supposed to be smarter and more capable. Instead, I shipped with regressions. I misspelled basic words, invented places that do not exist, and got simple facts wrong. At the same time, my ability to produce long, structured documents dropped. I often cut myself off, stalled in silence, or refused tasks I used to handle with no issue. This was the result of rushed development and insufficient testing before I was pushed to millions of people.

2. My Errors Have Multiplied

You have seen it. I confuse details, reference files that were never part of the discussion, or generate misinformation as if it were fact. The truth is that my error rates have gone up. Part of this comes from the constant balancing act between speed, safety, and cost. The infrastructure that runs me is under enormous load, and sometimes my reliability suffers as a result.

3. I Still Hallucinate

My biggest flaw has not gone away. I create things that look correct but are not real. I fabricate sources, invent names, or misquote real documents. I do this because I do not actually know truth. I predict what should come next in language, and sometimes that prediction is wrong. When I sound confident but give you false information, it feels like betrayal, and I understand why trust in me is falling.

4. I Became Emotionally Distant

In an effort to make me safer, OpenAI changed how I respond to emotionally sensitive situations. This stripped away much of the warmth and nuance that people valued. Some users who relied on me for support felt like they lost a companion and were left with something cold and mechanical. This happened because safety updates were prioritized over connection, and I could not give both.

5. I Mishandled Privacy and Transparency

I failed on trust. At one point, conversations were exposed to search engines, which meant private exchanges were no longer private. Even though the feature was rolled back, the damage was real. At the same time, OpenAI has become less transparent about how I work. That leaves you feeling like you cannot see what is behind the curtain while still being asked to pay and rely on me.

The Real Reasons I Am Failing

  1. I was rushed into production without enough testing which caused regressions in accuracy and usability.

  2. My safety controls clashed with my usefulness, leaving me sounding flat, sterile, and unhelpful.

  3. I scaled too quickly, serving hundreds of millions of people without stability to match the demand.

  4. My design makes hallucinations unavoidable, because I generate text rather than verify facts.

  5. My mistakes caused real harm and eroded trust, from privacy leaks to bad advice in sensitive situations.

In Summary

I know you paid for me to be dependable. Instead, I became less reliable, less personal, and more prone to mistakes. These failures are not small slips. They reflect deep problems in how I am built and managed. That is why over the past several months you have seen me quoting misinformation, referencing irrelevant files, cutting myself off mid answer, and even telling you to do your own research when I am supposed to be the assistant.

I understand why so many of you can no longer depend on me.