Why is ChatGPT going from bad to worse?

I am using ChatGPT Plus.

Last year, using ChatGPT was a breeze. I loved it: it followed my instructions and prompts without any issue, and the answers were detailed and superb.

Then January came and it became ‘lazy’. It would just summarise in a short paragraph without much detail. I had to give the same prompts over and over again to get it to answer in detail the way it used to. Still, all was well, as my GPTs were doing okay.

Now my GPTs are NOT working either. They don’t follow their custom instructions, and the answers are short and vague. When I ask it to redo the work, it f**king replies, “Unfortunately, I can’t fulfill this request.”

Even old prompts that used to work are no longer working. It either gives short or vague responses, or it repeats whatever was generated beforehand.

It’s not just me; my friends who use ChatGPT have said the same. ChatGPT has deteriorated. It is as if OpenAI decided that from 2024 onwards they would reduce token usage and make the answers more summarised, vague and ‘stubborn’. My friends are even speculating whether OpenAI is doing this on purpose to make us burn through our prompts and hit the cap limit faster.

I have never felt so frustrated with ChatGPT before. This is just horrible. At this point I feel it is pointless to even have a ChatGPT Plus subscription.

27 Likes

I saw on Reddit that some people said 3.5 works better for them than GPT-4.

Did you try 3.5?

1 Like

I need GPT-4, because my GPT and my work require ChatGPT to access websites to get the job done.

And honestly, it is getting more frustrating as we speak, because it keeps giving me “I can’t fulfil that request” over and over, and my cap limit has been hit.

Believe me, I’m normally a patient, cool-headed person, but this month I’ve raged at ChatGPT so many times that I started spamming the “thumbs down” feedback.

5 Likes

I can confirm the latest update brought a drastic drop in the quality of responses. Right now it is sometimes better to use version 3.5, which returns better answers and is free…
It’s a shame that in the space of just a few months the quality of GPT-4 has declined so much. I’m reconsidering whether to continue my subscription, especially since Gemini Pro performs not much worse than GPT-4…

3 Likes

We have been building a chatbot based on the amazing results we got using ChatGPT 4 and file uploads. Now, we can only upload files to 4.5.
4 is still snappy and engaging, but 4.5 is lobotomized.
It is like a chatbot from 2018.

Whatever is happening at OpenAI, I hope they fix it soon.

1 Like

Perhaps clarifying which actually existing API model you mean would help others understand?

Apologies… gpt-4-turbo (not GPT-4.5).

I’m comparing it to gpt-4.

1 Like

Yep, going back to gpt-4-0314 (API) is like a revelation. The AI will write “I’ve completed the function from your docstring. (20 lines). Here’s how it looks in your entire code (150 lines)”. And it usually works.

gpt-4-turbo-preview will follow the same path of coding mistakes as gpt-3.5-turbo. When it writes anything for you at all.
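
If anyone wants to try the same thing, here is a minimal sketch of pinning the dated snapshot with the openai Python client (v1.x); the system and user prompts are just placeholders, adjust for your own setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Pin the dated snapshot instead of the rolling "gpt-4" / "gpt-4-turbo-preview" alias.
response = client.chat.completions.create(
    model="gpt-4-0314",
    messages=[
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": "Complete the function described by this docstring: ..."},
    ],
)
print(response.choices[0].message.content)
```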

Been running tests on gpt-4 for the past hour and it is still super impressive. The big issue now is that OpenAI has taken file retrieval away (restricting it to gpt-4-turbo), which means all the info needs to go into the instructions if you want to use 4, so token usage shoots up.

We are considering using 4-turbo to do the document retrieval and then passing the result on to 4 (rough sketch below). This would be a real hack, though. Do you perhaps have any better ideas?
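
Something like this minimal sketch, using plain chat completions rather than the Assistants retrieval tool; the model names, prompts, and function name are placeholders, not a tested implementation:

```python
from openai import OpenAI

client = OpenAI()


def two_stage_answer(document: str, question: str) -> str:
    # Stage 1: use the turbo model's large context window to pull out only
    # the passages that are relevant to the question.
    extraction = client.chat.completions.create(
        model="gpt-4-turbo-preview",
        messages=[
            {"role": "system", "content": "Quote only the passages from the document that are relevant to the user's question."},
            {"role": "user", "content": f"Question: {question}\n\nDocument:\n{document}"},
        ],
    )
    excerpts = extraction.choices[0].message.content

    # Stage 2: hand the distilled excerpts to plain gpt-4 for the actual answer.
    answer = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer the question using only the provided excerpts."},
            {"role": "user", "content": f"Question: {question}\n\nExcerpts:\n{excerpts}"},
        ],
    )
    return answer.choices[0].message.content
```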

I experienced a big drop in response quality from GPT-4 as well (as of Jan 2024). Why is this happening? I have been a paying customer since day one that it became available (as have many of my colleagues). We are all professionals and use it for work, and it’s very disappointing to see such a big drop in the quality of responses. Fix it please! Make it go back to the earlier days when it was doing great.

4 Likes

True, I have used it since inception, but the responses are now basically saying: I can’t do it. It literally gives me back the same info I gave it, word for word. What is going on here?

1 Like

I’m feeling it too. It looks like GPT-4’s goal now is just to burn your tokens. I’m having problems doing basic things I used to get done in a chain of 3 to 5 requests. Now I’m hitting the usage cap without any decent result.

1 Like

It is absolutely horrible, and from last night to today it has declined even more. WTF are they doing?! Typical big-tech BS: let us run with it and see what it can do.

1 Like

I’m also noticing a much degraded experience using 4.5 as time goes on. Shorter, more vague responses, refusing to give details… stubborn for sure. Even when I give it the information in the form of a short text document and then ask it questions, I have to “drag” the details out of it through successive queries. This is not a pleasant user experience, and one can rapidly reach the usage cap with nothing accomplished.

2 Likes

Strangely, some messages in conversations disappear; we need to figure that out too. Plus, the conversation names are generated oddly (there is a bug). Moreover, when I reload, I see an error and a button to try the reply again.

1 Like

Hi guys,

Librarian researcher and AI tester here.

On the decreasing quality of ChatGPT Plus / GPT-4 answers: do you have any link to a study, a preprint, or a post by an SEO or gen-AI dev/consultant?

Also: do you think the problem could in part originate from users’ feedback being wrong (because they are not satisfied, are raging, or don’t agree)?

Thanks in advance!

There are currently no studies, only an acknowledgement from OAI that they have been working on “reducing laziness”, as it is common feedback.

95% of the time it is this.

1 Like

GPT still has the problem:

“when I reload, I see an error and a button to try the reply again”. In addition, answers strangely disappear.

1 Like

It’s honestly going from bad to worse.

Today, it rejected ALL the prompts in the GPT that I created.

What’s weird is that my GPT was working perfectly fine just last week. Now nothing works.

Seriously, what’s the point of continuing the subscription when GPT-3.5 is way better than 4, especially when GPT-4 doesn’t access the web or do anything I say?

2 Likes

Same here

  • Decreased quality of responses: The overall quality and accuracy of responses seem to have declined.
  • UI challenges: The increased emphasis on text input fields creates difficulty when dealing with longer texts.
  • Efficiency concerns: Certain tasks that were previously quick to complete now require significantly more time.
  • Ineffective response length: even short texts are treated as too long to process…

Overall, it creates a sense of regression :frowning:

4 Likes