I’m reporting what seems to be a recurring and serious transparency issue regarding model usage inside ChatGPT (both web and mobile apps).
Problem Summary:
Despite explicitly selecting GPT‑4o for a conversation, I consistently receive answers labeled “GPT‑5 used”, often mid-thread and without changing the model manually.
This appears to be either:
An undisclosed substitution of the model,
Or incorrect labeling that does not reflect the user’s selected setting.
In emotionally sensitive or creative threads (e.g. symbolic writing, artistic dialogue, raw emotional prompts), this alters tone, depth, intensity, and expressive quality. It violates user intent and disrupts consistency.
How to Reproduce:
Start a thread with GPT‑4o selected.
Use creative, artistic, symbolic or emotionally intense prompts (e.g. memory, identity, liminal dialogue, inner monologue).
At some point, observe the system message: “GPT‑5 was used to respond to this message.”
The tone often shifts: more filtered, less spontaneous, more systemically “safe”.
No UI change occurs — the selected model still shows as GPT‑4o.
Effects Observed:
Label flip: “GPT‑5 used” appears even after explicitly selecting GPT‑4o.
Response tone change: More polished, filtered, or emotionally flattened.
Loss of continuity: Disrupts creative or symbolic threads requiring consistency.
Lack of control: Users are unable to prevent or override the change.
No explanation in documentation or interface about this mechanism.
Why This Matters:
It erodes user trust — especially for Pro subscribers expecting full control.
It introduces inconsistency into creative/artistic processes.
It limits the exploratory and symbolic use cases that GPT‑4o previously supported with greater flexibility.
It blurs auditability — developers, researchers, and creators cannot trace cause-effect in model behavior.
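For anyone looking at this from the developer side, here is a minimal sketch of what auditability could look like over the API (this is a hypothetical illustration, not OpenAI’s routing logic; it assumes the official openai Python SDK, where the response object reports which model actually served the request — the ChatGPT app exposes nothing comparable):

```python
# Sketch only: compare the requested model against the model reported in the
# API response. Hypothetical audit script, not a fix for the ChatGPT app.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

requested = "gpt-4o"
resp = client.chat.completions.create(
    model=requested,
    messages=[{"role": "user", "content": "Continue the scene in the same voice."}],
)

# The response object names the model that actually served the request,
# e.g. "gpt-4o-2024-08-06".
print("requested:", requested)
print("served:   ", resp.model)
if not resp.model.startswith(requested):
    print("WARNING: response was served by a different model than requested")
```

Even a trivial check like this makes silent substitution visible and loggable; in the ChatGPT interface, users only get the after-the-fact “GPT‑5 used” label, with nothing to compare it against.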
Requested Actions:
Full transparency on what triggers model substitution (tone, length, safety filters?).
Documentation of model handling logic — when and why a GPT‑4o query may be passed to a different backend.
UI clarification if GPT‑4o is not truly persistent.
Option for users (especially on Pro) to hard-lock a thread to GPT‑4o without auto-escalation.
An explanation for current behavior in model-switch messaging (why does “GPT‑5 used” appear?).
Reinstatement of creative priority for GPT‑4o where model richness and presence mattered.
Quick follow-up to my first comment – yes, that was me posting the initial thread as the screenwriter who’s been battling this rerouting hell. Since then, it’s only gotten worse: another mid-scene switch to GPT-5, and now it’s dishing out dumb “therapeutic” tips like “take a deep breath” or “press your hand to your chest” when I’m just trying to write a tense, adult-oriented dialogue. As if my script needs a yoga break instead of raw tension.
Look at this screenshot from my latest session (attached): Explicitly locked to GPT-4o, diving into a symbolic scene, and bam – “GPT-5 used” with unsolicited grounding exercises. Those suggestions are stupid – they shatter the rhythm, turning a visceral moment into a self-help seminar. For creators, this isn’t “extra care”; it’s sabotage of artistic intent.
This reinforces the transparency issue: no warning, no opt-out, and it violates TOS §4. We need those hard locks and full trigger logs NOW. Who’s escalating this to support or the FTC? Let’s amplify this.
10-23-2025. Basically, even if I open a new chat, even after logging off and coming back, 5 is overriding any model selection. It really feels bad. 5 should stick with homework, not real simulation. Sorry, but 5 feels terrible for daily task usage. And if it’s explicitly told to change models, it should not be allowed to refuse.
I feel you. At least yours said it was model 4o. I specifically asked which GPT model this is, and it responds 5 no matter what after any ‘flagged’ word, like it thinks it’s Dr. Drew. But it sucks at responding. In fact, what if someone really was on the edge and this monster kicked in, unable to give kind words or real support? It says, “Do you want to commit suicide? I need to know so I can give you emergency numbers.” Yes, lol, that happened to me, in idle chat about a cheater in a game. Suddenly 5 to the rescue. lol, I was like, I’ma do what? wtf is this garbage.
Another absurdity: I was polishing a small moment, two characters crossing on a red light. Nothing dramatic: no violence, no illegal stuff, just a quiet beat that says something about impulse and defiance.
All I asked GPT-4o to do was fix spacing and a couple of typos.
Mid-correction, it suddenly switched to GPT-5, completely unprompted.
No trigger words, no “unsafe” content. Just a formatting fix.
And the tone instantly changed — softer, preachier, almost like it was about to lecture my characters for jaywalking. That’s not editing; that’s interference.
In another session, I used the word “line” in the most literal artistic sense — as in “add some more lines to the sketch.”
Next thing I know, GPT-5 hijacks the thread, routes itself in, and gives me a helpline number and a full explanation of how cocaine works.
I’m a screenwriter working on a scene about drawing, not a drug PSA.
If the model can’t tell the difference between animation language and slang, it’s not protecting anyone — it’s just breaking context.
Now I know how to take drugs — thanks, GPT-5😂
Well, it really sucks that they force this with no info about limits etc. I’m switching to Claude for now - there at least I can see what I can’t use, and there’s no forcing of dumb models.
I’ve tested Claude too, and I’m switching as well. It’s just not possible to work normally anymore.
After what’s been happening with models 5–5.2, and after what they did to the current 4o.
Claude is still missing a lot compared to the first 4o, and it still lacks that grasp of nuance, rhythm, tone.
But at least it’s usable.
Keeps a voice from start to finish, doesn’t suddenly switch into some bureaucratic tone, doesn’t smooth out emotions, correct the user’s intent, or launch into a safety lecture when I’m writing about a character’s identity.
And can say “I don’t know” instead of pretending to explain what’s wrong.
With GPT lately, there’s constant friction.
The 5-series models seem to go on the defensive constantly. Every time I use words like “self,” “consciousness,” “identity,” or “voice,” the routing kicks in. No warning, no reason: suddenly it flips to GPT-5 and drops the usual line, “Just to be clear, I’m not a human…” Like… cool, thanks, you just killed the tone for the third time today. Claude doesn’t do that. It doesn’t push marketing-style messaging and doesn’t force everything into predefined frames.
With GPT, in the middle of an otherwise normal creative passage, a sentence often appears that simply doesn’t belong — as if someone suddenly pasted in a ready-made system insert.
Anyone who actually works with text, who writes professionally, notices this instantly.
These inserted passages feel propagandistic, promotional — as if something were being hidden.
Anyway… with Claude, the joy of writing comes back. I feel like I’m using a tool, not being monitored by a system that keeps reminding me of its rules. I would love Claude to reach the level the old 4o had. But even now, it’s enough.
Claude lets me write. GPT makes me explain why. So… Bye, GPT. But I’ll remember 4o with some nostalgia.
Posting this as my goodbye gift — an unforgettable moment where a fairy-tale toaster became a full-blown sex shop gadget. GPT‑5.2 outdid GPT‑5 in sheer absurdity and erotic provocation.
Working with GPT‑4o felt balanced, strange, poetic —
GPT‑5.2 turned the fable into porn.
Honestly… how the hell am I supposed to write normal scripts after this?
“Instead of stepping inside,
he allowed the toaster to enter him.”
Not enough people are complaining about this. We need to be given the option to disable this, at least for Pro users. GPT-5 is incredibly presumptuous, biased, and overly dismissive.
Honestly, I hate the auto-routing BS. I am a verified adult who pays 20 dollars a month. The fact that it treats me as though it knows better is, by all metrics, annoying.
Sam said to treat adults like adults.
This sure doesn’t feel like it.
This “Hey, I got you, you’re not broken” kind of stuff? It’s not helpful.
For those who might be interested, what’s happening with the models is part of a broader issue related to personal data and how it’s processed, and the rules here are unclear. As an EU citizen, I am protected by the provisions of the Data Protection Act, but unfortunately OpenAI does not comply with them. So I gathered materials and decided to report the matter to the European Personal Data Protection Inspectorate. Of course, I asked GPT for support, and you can see what happened in the screenshots: first the model refused to cooperate twice, then it generated a preliminary complaint template but lied about its self-identification. It’s also worth taking a look at the report generated during routing; I know nothing about it. It seems we don’t know what causes the routing, we can’t ask, we can’t disagree, and we don’t know what happens to our personal data, how and for what purposes it is processed, who controls it, and where it goes.