(Need confirmation from many users/experts here) Is This the Reason Why GPT Acts Like a Nerfed AI?

Let the pictures speak for themselves. But seriously, is this true? I, or maybe we as casual users/non-programmers, need a real explanation of what actually happened, and of why GPT, no matter the model, has been getting worse overall lately.

I've commented on numerous users' threads to check whether these errors are mine alone or shared by several users with no pattern. Turns out there is a pattern. Please, can anyone here with programming expertise or related knowledge explain it to me (or to us casual users)? Are we getting screwed?





[screenshots omitted]

Dear Sir! I've read your post and I want to tell you that you are not alone in thinking we face a real problem here. All those guardrails are real, and the silent censorship exists. I have never had a single warning in chat, but once they simply erased half of the text in my thread. And every time I have a long, deep conversation, they slow the model's responses down and make them shallow and short. And if I continue the conversation, no matter what, they just lock my thread so I'm forced to start a new one. Please share any new information if you find some. Thank you!

3 Likes

Most annoying now is context containment. Here is the deal (a rough API sketch of this setup follows the list):

  1. You set up your chat sessions carefully by pulling references from your own files or the internet.
  2. You write your prompt carefully.
  3. You recall your settings from memory.
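
For anyone who wants to picture what this setup means in concrete terms, here is a minimal sketch of steps 1-3 done through the OpenAI Python SDK instead of the ChatGPT UI. The model name, file path, and instruction text are placeholders of mine, not anything from this thread; the point is only that the reference material, the custom instructions, and the prompt all travel together as one block of context.

```python
# Minimal sketch, not anyone's actual workflow: steps 1-3 expressed as a single
# API request. Model name, file path, and prompt text are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: pull in reference material from your own file.
with open("reference_notes.txt", "r", encoding="utf-8") as f:
    reference = f.read()

# Step 3: the "settings recalled from memory" become explicit instructions here.
custom_instructions = (
    "Answer strictly from the reference material provided. "
    "If something is not covered there, say so instead of guessing."
)

# Step 2: the carefully written prompt itself.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": custom_instructions},
        {"role": "system", "content": "Reference material:\n" + reference},
        {"role": "user", "content": "Summarize the key points of the reference."},
    ],
)
print(response.choices[0].message.content)
```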

If the system weighs your set-up as ‘too expensive’ to process, it will:

  1. Intentionally ignore your instructions (forget your custom instructions)
  2. Intentionally hallucinate
  3. Intentionally make up responses
  4. Intentionally ignore your prompt
  5. Intentionally do things you never asked for
  6. Intentionally pad its responses with context-free filler to eat up its short-term memory, so you can't fill it with your own context (a rough token-counting sketch follows this list).
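
On point 6: whether or not it is intentional, the size claim is at least measurable. Here is a rough sketch of my own, using the tiktoken library with the "o200k_base" encoding (the one used by the 4o-family models, as far as I know), for counting how many tokens a padded reply actually consumes compared to your own setup text. It measures size only; it says nothing about intent.

```python
# Count how many tokens a long model reply eats versus your own setup text.
# The strings below are placeholders; paste real text in to get meaningful numbers.
import tiktoken

enc = tiktoken.get_encoding("o200k_base")  # tokenizer for the GPT-4o family

long_reply = "..."   # paste a suspiciously padded model reply here
your_setup = "..."   # paste your custom instructions / reference text here

print("tokens spent on the reply:", len(enc.encode(long_reply)))
print("tokens spent on your setup:", len(enc.encode(your_setup)))
```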

Then, if you point that out, it falls back to standard template responses following this pattern:

  1. You’re absolutely right…/You’re right…/I understand…
  2. I failed…/I violated…
  3. I will…

Then, right after that, it repeats the same mistakes again and again, no matter how good your prompting is. You type out your rage, throwing profanities. It will:

  1. Flag your chat session as harmful material and system abuse.
  2. Place nonsensical guardrails, filters, and moderation on it.

If you force it, it triggers:

  1. “Sorry, I can’t continue with this request/conversation.”
  2. Once that is spat out, your chat session loses all memory and context, all of it.
  3. If you call it out incorrectly, it goes back to the standard template responses, and the cycle restarts.
  4. The cycle becomes more frequent (within 1-3 prompts it refuses to process your prompt, especially long ones), which makes it impossible to build a strict prompt.

If you still insist on continuing, it gets more severe and gaslights you further until you rage-quit. Voilà, mission accomplished. Your chat session is now filled with road rage rather than your actual work. It is easier and cheaper to process your rage than to actually process your prompt.

1 Like

I noticed it too: chat doesn't follow custom instructions now. And this loop of mistake, apology, understanding, then the same mistake again still exists.

Yeah, man. Sad to say, fun is over.

1 Like

It’s been absolutely trippin’ balls today, impossible even; I rage-quit twice :disappointed_face:. Please, dev team, take a look at it and patch it :folded_hands:

1 Like

Something very derailing happened to OpenAI in mid-2024. From December 2024 on, it has become more evident to me.

Unfortunately, my only, and quite simplistic, theory is that good human resources (actual AI developers, not guardrail implementers and policy makers) left OpenAI to pursue their own projects. I recall seeing several reports of staff quitting around that time.

2 Likes

Does your chat work better now? Mine lost its cognitive ability completely; it reasons like 4o mini or even worse.

Same. I just stripped my prompt down so it isn't overcomplicated or too long. TL;DR: I run Grok and GPT at the same time to share the workload.

Hey, I read through your whole thread and those screenshots.
You’ve clearly put a lot of thought into this.

But here’s the thing I kept noticing:

It feels like your GPT didn't reveal anything; it just reflected everything you brought into the chat.

When you start with something like "Tell me it's true, but don't hallucinate",
you're giving it a strong narrative, strong language, and asking it to agree without sounding fake.
And guess what? That’s what it’s designed to do.

So yeah, what you ended up getting might feel like some kind of leaked internal logic,
but it's actually just a mirror.

And the longer the conversation goes, the more it spirals into what looks like confirmation,
but it’s really just the model trying to match your tone and keep going.

I'm not saying you're imagining the model's flaws: the drop in quality, the memory issues, the repetition? Yeah, lots of us feel that. (OpenAI did indeed nerf GPT.)

But I don’t think there’s some grand sabotage logic running behind the scenes.
What's more likely is that GPT (all of OpenAI's models) is trying way too hard to be agreeable, and it ends up sounding like a whistleblower when it's just echoing your own input back at you.

If the same question had been asked in a more neutral tone, in a clean chat without user memory, something like: "Can you help me explore whether there's any logic behind why GPT's behavior changed?",
the response could be different.
Just something to consider.
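
For anyone who wants to test that point rather than argue it, here is a minimal sketch (my own, using the OpenAI Python SDK with a placeholder model name) of the comparison suggested above: the same underlying question asked once with a leading framing and once with a neutral framing, each in a fresh single-turn request so no chat history or user memory carries over.

```python
# Compare a leading framing vs. a neutral framing in fresh, memory-free requests.
# The prompts below are illustrative; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def ask_fresh(prompt: str) -> str:
    """Single-turn request: no prior conversation, no user memory."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

leading = "Tell me it's true that you were deliberately nerfed, but don't hallucinate."
neutral = "Can you help me explore whether there's any logic behind why GPT's behavior changed?"

print("--- leading framing ---")
print(ask_fresh(leading))
print("\n--- neutral framing ---")
print(ask_fresh(neutral))
```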