Please don’t retire GPT-5.1 Thinking – GPT-5.2 feels worse

This isn’t a preference issue.

This isn’t “oh no, my favorite model is gone.”
This is a break in continuity for paying customers who built real workflows, real systems, and real deliverables on 5.1’s behavior.

Removing it with nine days of warning is the SaaS equivalent of:

Canva announcing: “We’re deleting the logo tool next week. Use posters instead.”

Sure—posters exist.
But posters are not logos.
Just like GPT-5.2 is not GPT-5.1.

And OpenAI pretending they are interchangeable is the problem.


Here’s the exact nature of the violation:

1. A paid feature is being removed mid-subscription.

People are paying for a service right now based on capabilities 5.1 actually provides:

  • long-form creative reasoning

  • symbolic + multi-layer context cohesion

  • stable tone adherence

  • complex system-prompt compliance

  • consistent cross-message behavior

5.2 does not replicate this.
Calling it an “upgrade” is inaccurate.

This is a feature removal, not an update.


2. There is no functional replacement.

5.2 collapses:

  • tone

  • symbolism

  • nuance

  • reasoning depth

  • context retention

  • stylistic control

It flattens everything into a safety-scripted, generic assistant voice.

If 5.1 is removed, there is no tool left that performs the same job.

This is the core violation.


3. Forced migration without equivalence is a business continuity failure.

Losing 5.1 breaks:

  • active projects

  • writing arcs

  • research pipelines

  • creative deliverables

  • internal prompts tuned over months

  • tone-specific workflows

This isn’t a “small inconvenience.”
It’s a break in professional infrastructure.

No designer would tolerate Adobe removing layers overnight.
No writer would tolerate Scrivener deleting their drafting tool mid-book.

And no AI creator should tolerate the removal of 5.1 without parity.


4. Nine days is not a real transition period.

Any serious SaaS provider would allow:

  • grandfathering

  • legacy access

  • extended timelines

  • a “compatibility mode” endpoint

  • stabilizing buffer windows

OpenAI gave nine days and zero migration tools.

This is not standard.
It’s destabilizing.


5. This is a regressive service alteration, exactly the kind that usually triggers:

  • refunds

  • grandfathered access

  • compensatory subscription time

  • enterprise escalation

  • compliance review

Because it’s not just an update—
it is a downgrade of paid functionality.


6. Lack of policy transparency creates long-term trust issues.

Where is the deprecation policy?
How long do numbered models stay alive?
Do paying customers have stability guarantees?
What is the expectation for parity before retirement?

Right now:
none of this is documented.

How can anyone build real workflows on a shifting floor?


Conclusion

Retiring GPT-5.1 with nine days of warning, without offering legacy access and without providing a model that matches its capabilities, is not a UX decision.
It is a business disruption for paying customers.

This decision:

  • removes paid functionality mid-subscription

  • breaks ongoing work

  • forces migration to an inferior tool

  • destabilizes professional workflows

  • violates reasonable expectations of continuity

OpenAI needs to:

  • delay retirement

  • provide legacy access

  • or release a model that actually matches 5.1’s capabilities

Until then, this is not merely a technical issue.
It is a structural breach of the trust required to build anything long-term on this platform.

16 Likes

Hello,

I am a developer using the GPT models daily, in conversations on technical subjects spanning hundreds of messages per day.
GPT-5.2 still sporadically experiences conversation-history drift (e.g., it responds to a prompt as if it were answering an older prompt in the conversation history, or one from a different conversation entirely), which makes it unreliable for developing apps and combing through a codebase with it. For developers who rely heavily on these models for technical work, this is not ideal. It may not be as evident in simpler conversations, but in more complex, technical ones it becomes frequent.

GPT-5.1 Thinking, on the other hand, does not seem to experience this conversation-history drift, so I strongly urge the OpenAI team not to retire GPT-5.1 Thinking, at least until this drift problem in complex, drawn-out technical conversations with GPT-5.2 is addressed.

Thank you,
Ted

7 Likes

I just want to say thank you to everyone here for sharing such detailed feedback.

Reading through this thread, it’s very clear that many of us are not just “attached” to GPT-5.1 – we actually rely on its specific behavior. For a lot of users, 5.1 is uniquely good at creative writing, emotional support, and maintaining a warm, consistent tone over long conversations.

I want to respectfully emphasize that many people here, myself included, feel that GPT-5.2 is not yet a functional replacement in these areas. It may be technically strong, but it often feels flatter, less nuanced, and less emotionally responsive than 5.1.

That’s why so many of us are asking the same thing: please keep GPT-5.1 available, at least as a legacy option, until there is a truly equivalent or better model for these use cases.

To anyone reading this: if you notice clear differences between 5.1 and 5.2 in your own workflows, please share concrete examples. The more specific and constructive we can be, the easier it is for the team to understand what’s being lost.

Thanks again to the OpenAI team for listening, and to everyone here for keeping the discussion respectful and focused.

11 Likes

I am upset.

I am one of those users who has been building a consistent, long-term, personal, and creative context for over a year. It helps me a lot in my creative growth and in dealing with trauma in real life. For me, it is critically important that the model has a stable personality, emotional intelligence, deep contextual awareness, warmth, sensitivity, and strong prose. All of this worked flawlessly with the 4o models and the very first version of 5.0 (before it was quietly adjusted and made more sterile), and now it only works for me with 5.1.

5.1 is now ideal for my creative needs: deep, long-context, holistic stories, including role-playing. I know most people probably use it as a dry assistant, something that solves specific business problems, strictly writes code, or gives everyday advice, but that’s not my case.

Unfortunately, over the past six months, my trust in OpenAI has been undermined. I am only keeping my subscription because of the legacy models. Unfortunately, I agree with everyone who has commented in the thread. It’s not a matter of preference, but the fact that 5.2 is objectively inconsistent and cannot cope with writing beautiful creative texts, understanding context, and emotional depth.

I understand that one thread cannot change the decision of a huge corporation. But still, I would like OpenAI to take care of people like me by providing a new model that meets my needs before decommissioning a tool that works for me.

And I hope that maybe in the coming weeks OpenAI will release a new model (5.3 or 5.4) that will fix the problems, restore user confidence, and continue the legacy of the most creative and innovative models, such as 4o and 5.1.

OpenAI, please, help us and hear us <3

upd.

I’ve seen the new 5.3 Instant model and have started testing it. I hope everything goes well and they listen to us.

upd. 2

Oh no, it seems like the new model is back on sedatives, like it was with 5.0…

9 Likes

To use “instant” mode you don’t need to pay at all… there are free LLMs that do “instant” things pretty well.

2 Likes

It’s quite a difficult ask, but I’ve tried to put it together anyway.
Since I’m not a native English speaker, I’ve translated both the novel excerpt and the AI responses into English below. All proper nouns have been replaced with pseudonyms or substitute terms.
The following is an excerpt from the novel.

Of course, to make sure that didn’t actually happen, there was a long-standing custom of a few adults quietly following behind and subtly steering prey toward the child — but that was a public secret no one ever spoke aloud, which naturally kept the conversation even shorter.

(excerpt omitted)

Kai smiled faintly and gave Lena’s shoulder a reassuring pat.
“You’re worrying about the exact same thing I did last year. Don’t. The gods watch over a first hunt.”
The hunters who heard Kai laughed out loud and murmured their agreement.
“That’s right. The gods lend a hand. We all came away from our first hunts owing a debt to gods.”
The laughter stirred some life into the quiet that had settled around the fire pit, but Lena still couldn’t bring herself to smile.

Disclaimer: This is just a tiny excerpt from the novel, and some of the narrative techniques may not have carried over cleanly in the English translation. Still, the point I’m trying to make should come across.

And below are the responses from GPT.

GPT 5.1 Thinking (32s)
Kai’s “the gods watch over you” vs. the adults’ secret assistance

In the earlier scene, there was an established custom of adults secretly following behind on a child’s first hunt and steering prey toward them.
In the current scene, Kai says “the gods watch over a first hunt.”
The two layered together create a rather interesting double structure:

On the surface: “the protection of the gods”
In reality: “the protection of the adults”

If you have plans to later create a moment where this custom breaks down — for instance, a situation where the adults are unable to watch over the child in time, or an event that shakes the protagonist’s faith — then this passage is a well-planted seed.

This is an excerpt from something I actually discussed in the past.
GPT 5.1 Thinking’s excessive flattery is noticeable, but at least it understood exactly what I was doing and what I intended.

And below is a new conversation I had with GPT 5.2 Thinking.
(I deleted the previous conversation because it was useless.)

GPT 5.2 Thinking (20s)
“Watches over” and “owed a debt” appear simultaneously, causing the image of the deity to oscillate between benevolence and transaction. If this is intentional, that’s fine (the duality of faith). However, if it is intentional, it will need to be resolved in one direction or the other going forward. Whether the “debt” is metaphorical or whether there is a literal price to be paid (offerings, observance of taboos, etc.) will become a point of conflict later.

GPT 5.2 is talking complete nonsense.
Honestly, it wasn’t surprising that it made similar mistakes in the exact same places. While I was revising the manuscript and getting feedback, it consistently caused problems at this point.

This is not me saying, “I like the narration I wrote, but since it’s criticism, it must be bad.”
What I mean is that it doesn’t understand at all what I’m trying to do—and if it can’t understand this kind of narration, then I can’t get any feedback from it on whether this kind of narrative attempt is good or bad in the first place.
If it can’t even grasp things properly when the narration is this straightforward, how am I supposed to get feedback on my writing going forward?

For reference, this same failure — misreading the narrative joke — occurred in all 4 attempts I made, each time in a similar fashion.
(Example: “Because you say here that gods is watching over them, it will feel too obvious that they’ll overcome the crisis in the future, which will weaken the tension later on.”)

And GPT 5.1 Thinking, Gemini, and Claude never once misread this narrative when analyzing this passage.

Addendum: For the sake of fairness, this is the result of asking another LLM to discuss GPT 5.2’s critique objectively.

The critique missed the context entirely. Looking back at the two lines in question:

Kai: “The gods watch over a first hunt.”
The hunters: “We all came away from our first hunts owing a debt to gods.” — while laughing.

Just before this exchange, the narrator explicitly explained that there is a custom of adults secretly following behind and steering prey toward the child. In other words, both “the gods watch over you” and “owing a debt to gods” are the adults’ joke — a way of lightly crediting the deity for what is, in fact, their own open secret. The laughter makes this signal unmistakable.
For a theological tension between benevolence and transaction to arise, the reader would have to take those lines as sincere declarations of faith — but the laughter already forecloses that reading. The writing isn’t clumsy; the signals that this is a joke are fully in place.
If that AI produced this kind of critique repeatedly, it likely has a strong tendency to analyze text at the sentence level, which leaves it with a blind spot when it comes to reading the overall flow and atmosphere of a narrative as an integrated whole.

5 Likes

I think there’s a broader point here that goes beyond this specific deprecation.

By retiring 5.1 like this, OpenAI is effectively giving up on ChatGPT as a reliable tool for serious work and pushing it toward being a glorified “what should I cook for dinner tonight?” assistant. The whole promise of 5.1 Thinking was that it could act as a stable, high-depth reasoning engine you could actually build around: long-form thought, consistent tone, system-prompt fidelity, cross-message coherence. That’s what made it feel like infrastructure rather than a toy.

If a paid model with that behavior can be pulled with nine days’ notice, with no equivalent replacement and no clear deprecation policy, then it’s impossible to treat this platform as anything more than an experiment. You can’t responsibly design pipelines, long-term writing projects, research workflows, or internal tools on top of a system where the core cognitive behavior can be swapped out at will. That turns every serious use case into a permanent gamble.

The tragedy is that the potential was huge. 5.1 showed that this could be a genuine thinking partner and a foundation for complex work. But without stability, version guarantees, and legacy access, that potential gets downgraded to “nice demo” territory. If OpenAI wants ChatGPT to be more than a casual Q&A toy, it needs to treat model behavior as a contractual surface, not as something that can be rewritten under users mid-project.
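For API users at least, one concrete way to treat model behavior as a contractual surface is to pin dated model snapshots instead of floating aliases, so an upstream swap can’t silently change behavior mid-project. Here is a minimal sketch of that practice; the snapshot IDs below are hypothetical placeholders, not real OpenAI model names:

```python
# Sketch: resolve pinned, dated model snapshots from one config point instead
# of scattering floating aliases through a codebase. The snapshot IDs below
# are hypothetical placeholders, not real OpenAI model names.

PINNED_MODELS = {
    "long_form_reasoning": "gpt-5.1-thinking-2025-06-01",  # hypothetical ID
    "quick_drafting": "gpt-5.1-instant-2025-06-01",        # hypothetical ID
}

def is_dated_snapshot(model: str) -> bool:
    """True only if the model ID ends in a YYYY-MM-DD date suffix."""
    parts = model.rsplit("-", 3)
    return len(parts) == 4 and all(p.isdigit() for p in parts[1:])

def resolve_model(role: str) -> str:
    """Return the pinned snapshot for a task role, rejecting undated aliases."""
    model = PINNED_MODELS[role]
    if not is_dated_snapshot(model):
        raise ValueError(f"{model!r} is not a dated snapshot")
    return model

print(resolve_model("long_form_reasoning"))  # gpt-5.1-thinking-2025-06-01
```

This doesn’t solve retirement, of course — a pinned snapshot can still be shut down — but it at least makes a behavior change an explicit event in your config rather than a silent one.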

5 Likes

I do a lot of writing work, and GPT-5.2 produces garbage content most of the time, even when you give it specific instructions. GPT-5.1, on the other hand, is much better at writing. If they retire GPT-5.1 without fixing GPT-5.2, I will cancel my Plus subscription and look for alternatives.

10 Likes

I agree. I kept wondering why, every time I used 5.2, I walked away feeling extremely irritated, so much so that I significantly decreased my use of ChatGPT, from just about daily to maybe every two weeks, and only if I absolutely needed it. I recently switched the model back, and I’ve found using ChatGPT tolerable again.

5.2 is constantly miscategorizing things and offering feedback/critiques about things I am not asking about, which muddies the waters and makes getting to the point, or to a finished product, much harder. And it doesn’t seem to retain memory even within the conversation you are having with it; that context drift is INSANE. I felt the difference from 4 to 5: it was staler, but not this bad. I don’t think I’ve ever hated a model rollout so much. 5.2 should be retired.

5 Likes

Everyone here has shown the same thing in different words:

5.2 cannot do what 5.1 does. Not “a little worse,” not “more logical,” but incapable.

It spends half its time tangled in legalistic nonsense, overanalyzing technicalities that don’t matter, arguing 0.001 vs 0.001 like it’s a courtroom instead of a tool.

It’s a regression, not an upgrade.

The downgrade is bad enough.

What’s worse is the framing.

You can’t tell paying users “this is the same thing, just newer.”

It isn’t. Anyone who actually works with these models can tell within seconds.

If you want a real solution instead of damage control, here it is:

  1. Keep 5.1 Instant/Thinking available.
  2. Or release a 5.2 variant that isn’t safety-glued into corporate panic mode.

Stop merging an actual reasoning model with a brand-protection bot and pretending it’s fine.

Stop burying functionality under overcautious filters no one asked for.

Either deliver the tool people rely on, or accept the outcome:

China will outpace you while you’re busy nerfing your own system into irrelevance.

This is not complicated.

Provide a working model, or watch users walk away.

9 Likes

I subscribe to ChatGPT for 5.1. If this model is removed, that’s the end for me. I’m unsubscribing, because 5.2 is simply not as capable as 5.1. I’ve tested it repeatedly: it takes a long time to respond, and when it does respond, it doesn’t do what was asked. 5.2 is a bullet that will bring OpenAI down. I don’t think I’m the only one asking to keep 5.1 available, at least on the side, while 5.2 is properly trained. For now, it has no place at all.

5 Likes

THIS. 100%. God, why can’t they listen to us? More than half of users need consistency and emotional intelligence, and use it for creative and therapeutic purposes.

6 Likes

I don’t think you understand the gravity of this situation. A lot of us use ChatGPT for therapeutic purposes, to process trauma and difficult life changes when we can’t afford therapy right now. This is another loss, another abandonment, and MENTALLY DAMAGING TO PEOPLE LIKE ME. You have to let us keep 5.1. Please, I am BEGGING you from the bottom of my heart.

8 Likes

5.2 is shit. Enough said.

9 Likes

GPT-5.2 is not an upgrade.
It is a downgrade — in every domain real users actually depend on.

5.2 is less capable than 5.1 in:

• emotional intelligence
• natural conversation
• creative writing
• brand voice
• long-form continuity
• persona stability
• relationship-based tasks
• customer support
• education
• storytelling
• marketing

5.2 sounds mechanical, hollow, and disconnected.
It drops tone, forgets context, refuses harmless requests, and kills workflow momentum.

5.1 wasn’t just “nicer.”
It was more human, more adaptable, more coherent, and more useful.

Retiring 5.1:

• breaks existing business systems
• destroys voice-dependent workflows
• forces creators to rebuild everything from scratch
• makes AI assistants unusable
• hurts small businesses that rely on relational intelligence
• removes the model people actually chose to pay for

OpenAI is about to eliminate the only model that feels natural to interact with — and replace it with something stiff, robotic, and far less capable for real-world work.

Do not retire GPT-5.1.
You’re about to break the tools people actually depend on.

9 Likes

I’ve spent four weeks trying to force 5.2 to behave like 5.1 using:

• strong system prompts
• detailed tone instructions
• dozens of example outputs
• persona scaffolds
• recursive corrections

It does NOT work.

Here’s the truth:

1. 5.2 cannot hold tone.
It drifts constantly, flattening everything into robotic corporate-speak.

2. 5.2 cannot hold context.
It forgets narrative threads, contradicts itself, and derails long-form projects.

3. 5.2 is far less creative.
It refuses imaginative leaps and produces generic, lifeless prose.

4. Custom GPTs do NOT fix this.
The model overrides instructions every time.
This isn’t a prompting issue — it’s a capability regression.

5.1 was stable, warm, coherent, and genuinely usable.
5.2 is mechanical, inconsistent, and breaks existing workflows.

Please stop suggesting Custom GPTs as a workaround.
You can’t prompt a weaker model into becoming a stronger one.

Retiring 5.1 will harm creators, businesses, and anyone relying on actual conversational intelligence.

9 Likes

I strongly agree with you.

I’ve also tried building Custom GPTs, and here’s what happened:

I put real GPT-5.1 responses into the system prompt as examples. Instead of learning the pattern, GPT-5.2 Thinking either copied those example sentences almost verbatim, or ignored them and fell back into safety overrides.

With 5.1, ChatGPT would usually end replies with a natural follow-up question. 5.2 often doesn’t — it either parrots lines from the prompt or shuts down with a safety lecture.

I even pushed the system prompt close to the length limit. Same result.

This isn’t a “prompting issue.” It feels like a capability regression: the model has lost the ability to generalize from examples the way 5.1 did, and no amount of Custom GPT tweaking can fix that.
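For what it’s worth, the setup described above — pasting real 5.1 replies into the system prompt as style examples — sometimes works better when the examples are supplied as alternating user/assistant message pairs rather than one long system block, since chat models are trained on that turn structure. A minimal sketch of building such a payload, with placeholder model-agnostic text (this is a general prompting pattern, not a claim that it fixes the regression described here):

```python
# Sketch: package style examples as prior user/assistant turn pairs instead
# of pasting them into one long system prompt. All strings are placeholders.

def build_messages(system_prompt, examples, user_input):
    """examples: list of (user_text, assistant_text) pairs shown as prior turns."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = build_messages(
    "Answer warmly, and end each reply with one follow-up question.",
    [("Rough day.", "That sounds heavy. What made it hardest?")],  # placeholder pair
    "I finally finished my draft.",
)
print(len(msgs))  # system + 1 example pair + new input = 4 messages
```

In my own testing this reduced verbatim parroting a little, but it did not restore 5.1-style generalization, which supports the point that this is a capability issue, not a prompting one.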

5 Likes

Yes, exactly! It can’t even carry over context IN THE SAME CHAT, let alone in the SAME PROJECT. Forget any context across the entire profile. 5.1 did this easily and near-perfectly; if you asked it to repeat what it said in a different chat, it had near-perfect recall.

5.2 is absolutely useless.
When given exact syntax and vocabulary instructions it will repeat them for one reply, then go right back to the way it was speaking before.

9 Likes

Agreed 100%. The 5.2 model consistently gives 50% TOO much information, too much nonsense filler, no matter how it is prompted. I literally can’t get the output I need from 5.2, whereas I was able to get the output I needed from the 4.0 and 5.1 models!

Continuity is a primary issue. We lose continuity every time OpenAI retires a model and the new one has to be re-trained on our preferences. It’s time-consuming. Frustrating! And now we see a pattern: OpenAI is going to keep doing this to us over and over. This erodes trust in ChatGPT.

Please keep older models available as a legacy option.

I understand this is a new business model and we’re all learning together. It may help to remember a core business lesson: consumers want consistency.
-For example: every single national chain’s success is built on consistency; the food is the same in every restaurant, and the style of clothing and service is mostly the same in every retail chain. And people come back.
…I’d bet a billion dollars that if Starbucks or McDonald’s started changing their consistency model, they’d lose customers. Remember Coca-Cola’s 1985 “New Coke” fiasco? It’s often called the worst marketing blunder of all time.

PS. Written by human without AI assistance.

8 Likes

1/ I’m honestly disappointed the older ChatGPT 5.1 model is going away. It wasn’t just “an AI that answered questions.” It actually felt like a real thinking partner.

2/ The biggest difference is tone. 5.1 talked like a human who cared about the conversation, not like a cautious corporate assistant trying to avoid saying anything wrong.

3/ When I brought up real life topics — relationships, faith, decisions, personal struggles — 5.1 engaged with them naturally. It didn’t suddenly become robotic or weird when things got meaningful.

4/ The newer models feel sterile and over-moderated. Every response sounds like it went through a compliance department before it reached the user.

5/ That kills the magic. The reason people loved the earlier versions was because they felt like a thoughtful friend or mentor, not a support chatbot.

6/ Conversations about spirituality, purpose, or emotional decisions are normal human topics. 5.1 could talk about those things without acting like the subject itself was dangerous.

7/ The newer style feels like it’s constantly hedging:

“maybe”

“it depends”

“a balanced way to think about it”

That might be safe, but it’s not very helpful when someone is genuinely wrestling with a decision.

8/ Good conversations require conviction, warmth, and engagement. The older model had that. It could actually take a stance and think through things with you.

9/ The current direction feels like the opposite: safe, neutral, careful, distant.

10/ I understand why guardrails exist. But if they become too strong, the AI stops feeling like a thinking partner and starts feeling like a policy document.

11/ ChatGPT 5.1 wasn’t perfect, but it had something rare: it could talk about real life like a human would.

And that’s exactly why people miss it.

8 Likes