ChatGPT got worse over the months?

I just wanted to open this thread to rant about how bad ChatGPT has become for me. No matter how detailed my prompts are or how strictly I set rules for the output (for example, a character minimum or limit), ChatGPT is unable to follow them, even when I instruct it to use the Code Interpreter. It also forgets context I set up earlier in the conversation. It's no better with Custom GPTs or the knowledge base. It's very, very disappointing, and I'm asking myself why I'm paying money for this. I started using Claude and I get better results there. I feel really dumb paying for this quality of output tbh…
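The only semi-reliable fix I've found for the length rules is to stop trusting the model and trim its output myself after the fact. A rough sketch of the trimming step (nothing ChatGPT-specific, and the helper name is my own):

```python
def enforce_char_limit(text: str, max_chars: int) -> str:
    """Trim model output to max_chars, preferring a sentence boundary.

    The model routinely overshoots requested limits, so this clamps
    the reply client-side instead of relying on the prompt.
    """
    if len(text) <= max_chars:
        return text
    cut = text[:max_chars]
    # Cut at the last full sentence inside the window if there is one.
    boundary = cut.rfind(". ")
    return cut[: boundary + 1] if boundary != -1 else cut
```

You still ask for the limit in the prompt so the model aims roughly right, but the hard guarantee comes from this post-processing, not from the instruction.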

9 Likes

There is a similar thread on this issue. Have a look here:

And yes, I also noticed the issues you are describing. I have a ChatGPT Plus subscription and noticed the degrading quality of ChatGPT4 responses over time.

6 Likes

Yes, it often forgets its own last output; ChatGPT has become horrible.

Claude is like talking to an 8 yr old, vs. talking to a 1 yr old with ChatGPT.

You really have to guide ChatGPT and hold back the urge to yell at it; it has the contextual awareness of a toddler about what it just did. Both are sort of dumb compared to humans once you get detailed, but overall Claude is great and ChatGPT makes me wanna cuss it out (and I do).

1 Like

You know what, it has been happening since they announced the launch of ‘GPT-5’. There is a big possibility that OpenAI deliberately downgraded ChatGPT-4 in anticipation of ‘GPT-5’. Most probably, people who only use ChatGPT-4 for everyday tasks don’t feel much difference. However, people like us who use ChatGPT for brainpower-intensive work, complex instructions, and detailed, in-depth prompts for intellectual pursuits or any other endeavor will suffer. In addition, ChatGPT-4 still suffers from a ‘calculation deficiency’.

Here is the main thread:

1 Like

I’m glad to know I’m not the only one. I think that if there were a metric showing how often people become belligerent with ChatGPT, you could tell when they nerf the model’s performance vs. when they do quality updates, just from how often people swear in fits of rage at it.

When I start to feel abusive towards the AI, that’s when I know I need to take a break and wait for the next silent update to fix whatever they just broke without ever acknowledging it.

I cancelled my account ~50 days ago and I’m using the Anthropic and OpenAI APIs instead. The situation is such that I’m using Claude more, since my focus is programming questions and GPT-4 has been unsatisfactory for that.

I commented on a similar thread months ago, stating my frustration shortly after the launch of GPT-4, when I noticed a significant drop in output quality.

I mainly use ChatGPT for content generation, SEO, coding, and a little bit of programming. Also, whether I try to be very specific or very broad with custom GPTs, my custom instructions, and my knowledge base, I get bad results.

It’s amazing how useless the knowledge base is when ChatGPT forgets that a knowledge base even exists after the first reply. It’s frustrating how useless it has become.

Anthropic isn’t perfect either, but I would say it’s MUCH better. It’s a little bit expensive for me, since I use the newest model and pay per token. But I would rather spend more money on that than on ChatGPT.

I even have many old chats in my history where I can clearly see how much better the responses used to be. I will probably also cancel my subscription once I have found a new workflow for myself. ChatGPT is still great for brainless tasks, like when I ask what I could cook with my ingredients, or what the correct translation of a certain phrase is, etc.

i can totally relate. i am paying for both chatgpt and github copilot. both are unable to perform the simplest coding tasks anymore. the times when i could actually expect to get working code are long gone.

i am living in germany so i can’t pay for claude unfortunately, but claude currently feels like chatgpt 1 year ago - it is responsive and understands what i mean in most cases.

it is unfortunate that we in europe are stuck with openai for now, because of the regulatory hell that people with zero idea about the industry are putting us in.

2 Likes

Thanks! Exactly. I’ve noticed that over the month of April, ChatGPT has regressed a lot. It has become unusable to the point that I could write a better algorithm lounging in the backyard in my free time.

It happened during that big update they pushed to make ChatGPT more humanlike and less AI. Since then it has turned completely lazy and has not provided a single accurate answer. This is not an exaggeration. Users who are familiar will know what I’m talking about.

Meanwhile the monthly fee is still a skyrocketing $25, but the quality of the service has only gone down. 70% of the time the premium access is not even accessible, the service being laggy, unavailable, clunky, slow, and unreliable.

I’m actually appalled at this monumental display of negligence.

Remember OpenAI, the only person who will hold you accountable is yourself. So this is a friendly reminder to do better or I will cancel all affiliations with ChatGPT4 everywhere and look for alternative solutions.

1 Like

I agree with this sentiment, and I suspect that only “a select few” users still enjoy what used to be the “premium” ChatGPT-4, along with the API users who get 32k context (from OpenAI’s YouTube presentation). I was really excited last year; 2023 was the golden year of ChatGPT. This year… not so much. Corporate users are probably prioritized, as they are the money makers, and Microsoft had to have the biggest slice of the pie as OpenAI’s biggest investor. I suspect this degradation and unresponsive service will stay until a “new subscription” or a newer version of GPT is out. As long as those “strict guidelines” exist for plebeians, the service for “commoners” will keep degrading, except for a few premium users. I can testify to this because I am both a “commoner” and a “premium” user on a separate “corporate” account.

ever since the outage, it’s nearly useless. what is the point of paying for the premium subscription?

Will OpenAI fix it, or will it just keep getting worse? Any tips on better prompting to make this version provide even semi-intelligent outputs?

Could this be due to the problem of getting new human-generated material for training the thing?