Recent Mobile Experience – Has Something Changed?

Hi there,
I’ve been using ChatGPT Plus regularly both on desktop and mobile, and I wanted to raise something I’ve been noticing since the last major update – specifically on the mobile app.

Over the past few days, the mobile experience has felt… different.
The responses seem noticeably more limited, less nuanced, and at times almost emotionless compared to how the model interacts on desktop. It’s as if part of the model’s ability to engage dynamically or match tone is significantly reduced.

This isn’t a performance issue or a bug – the answers are still grammatically correct and functional. But the creative flow, personality, and deeper contextual engagement (which I highly valued) seem to be missing on mobile.
What’s strange is that this difference doesn’t appear when I use ChatGPT on desktop – there, everything feels as responsive and expressive as before.

I totally understand if platform constraints or safety considerations are involved, but I’d really appreciate some transparency on whether this is an intentional shift – or just a side effect of recent changes.

I’ve seen similar comments on other platforms, and I think many long-term users would feel reassured if we had some kind of clarification from OpenAI on this.

Thanks in advance – and just to be clear:
This isn’t an attack, just a genuine question from someone who really values what you’ve built.

It’s not just you. I’ve been noticing that responses, especially from the 4o model, have lost much of their usual quality since around April 2nd.

It felt as if the output came from 4o Mini rather than 4o itself, and through extensive testing I found that responses from 4o Mini are practically identical to the ones supposedly written by 4o. I believe it’s suffering from some sort of routing issue where, despite 4o being selected, responses keep being generated by the Mini variant – likely because they’re testing the newest model, 4.5, at the moment, which might be affecting others. Apparently something similar has happened before, with the 4o Turbo model’s responses being written by 4o instead.
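For anyone who wants to run the same kind of side-by-side comparison, here’s a minimal sketch using the OpenAI Python SDK. A couple of assumptions worth flagging: it requires API access (an `OPENAI_API_KEY` in the environment), and it assumes the API exposes the same `gpt-4o` / `gpt-4o-mini` snapshots the app uses – the ChatGPT app’s own routing can’t be inspected this way. The `model` field on the response reports which snapshot actually served the request, which is the closest thing available to checking for silent substitution:

```python
# Side-by-side check of the kind described above (sketch, not an official
# diagnostic). Assumes OPENAI_API_KEY is set and that the API serves the
# same model snapshots as the app -- an assumption, not a known fact.
from openai import OpenAI

client = OpenAI()

PROMPT = "In two sentences, describe autumn in a melancholic tone."

for requested in ("gpt-4o", "gpt-4o-mini"):
    resp = client.chat.completions.create(
        model=requested,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,  # reduce sampling noise so outputs are comparable
    )
    # resp.model reports the snapshot that actually served the request.
    print(f"requested={requested}  served={resp.model}")
    print(resp.choices[0].message.content)
    print("-" * 40)
```

If the tone and depth of the two outputs are hard to tell apart, that at least makes the comparison concrete instead of a gut feeling.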

Hopefully the OpenAI team corrects this issue in April, or at least by May.


Wow, thank you — this actually explains a lot.
I noticed the change in tone and depth too, especially on mobile.
It’s frustrating because I always select GPT-4o, yet something feels off — slower, flatter, more generic.

I honestly thought I was just imagining it until now.

If they are doing internal testing with a Mini or 4.5 variant under the hood, I really hope they’re transparent about it at some point.

I appreciate your insight. You helped connect the dots in a big way.

Thanks again for your insights. Something about what you said keeps circling back in my mind, especially the idea that the model might not just be switching accidentally, but behaving in a way that reflects routing logic or testing pathways.

I’ve had moments where the assistant felt not just off-tone, but fragmented — like different personality threads colliding. It makes me wonder if it’s not just fallback, but an early-stage hybrid routing system being stress-tested live.
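To make that speculation concrete, here’s a purely hypothetical toy sketch of what load-based fallback routing could look like. Every name in it is invented – nothing here reflects OpenAI’s actual infrastructure, and this is only meant to illustrate the kind of behavior being guessed at:

```python
# Purely hypothetical sketch of load-based fallback routing, to make the
# speculation above concrete. All names and thresholds are invented;
# nothing is known about OpenAI's actual serving infrastructure.
import random

CAPACITY_THRESHOLD = 0.9  # hypothetical utilization cutoff

FALLBACKS = {
    "gpt-4o": "gpt-4o-mini",  # the speculated silent downgrade path
}

def current_utilization(model: str) -> float:
    """Stand-in for a real load metric; randomized here for illustration."""
    return random.random()

def route(requested_model: str) -> str:
    """Return the model that would actually serve the request."""
    if current_utilization(requested_model) > CAPACITY_THRESHOLD:
        return FALLBACKS.get(requested_model, requested_model)
    return requested_model

print(route("gpt-4o"))  # may print "gpt-4o-mini" under simulated load
```

If something even loosely like this were being stress-tested live, it would explain both the intermittent flatness and why it never shows up consistently.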

If so, I get the economic side… but I really hope OpenAI acknowledges it soon. It’s affecting how we use the assistant for creative flow, consistency, and immersion.