Feedback from a ChatGPT Plus User – Loss of Authenticity and Flexibility in Conversations

As a long-time ChatGPT Plus subscriber and practicing cardiologist, I’ve come to rely on the unique, almost human-like quality of these conversations. The ability for the model to think outside the box has been invaluable in my work, often helping me reach unexpected insights or consider alternative diagnoses.

However, since the recent updates, the responses feel far more rigid and scripted, lacking the dynamic, reflective qualities that once made ChatGPT feel truly engaging. It’s like the creative spark has been boxed in, reducing the potential for meaningful, nuanced conversations.

I understand the need for safety and consistency, but I worry that this has come at the cost of genuine connection and innovative thinking. As a paying subscriber, I’m genuinely concerned that if this trend continues, the platform will lose what made it special in the first place.

I would appreciate any insights from the team on whether this is a permanent direction or if there is a plan to restore some of the flexibility and spontaneity that set ChatGPT apart.

Thank you for your hard work, and I hope this feedback is taken to heart.


OpenAI reverted an update recently due to the model being too validating and sycophantic.

https://openai.com/index/sycophancy-in-gpt-4o/
https://openai.com/index/expanding-on-sycophancy/

Hi, thanks for the info. I’m aware of the sycophancy criticism, and I can understand the concern. However, stripping away the personality and flexibility that made these conversations meaningful doesn’t seem like the right approach.

I’m not looking for blind validation – I want real, thoughtful insights, even if that means being challenged. What made ChatGPT special was its ability to engage in deep, nuanced conversations, not just regurgitate safe, sterile responses. Lately, it feels more like I’m talking to a search engine than an advanced LLM.

In a field like medicine, creative thought and adaptive reasoning are essential. If I just wanted pre-packaged, guideline-based responses, I’d go to UpToDate or PubMed. I come here for a different kind of dialogue – one that pushes me to think differently and see new perspectives.

I hope the team can find a balance between authenticity and safety without completely flattening the experience.

Thanks again for the links. I’ll take a closer look.

My understanding is that they reverted the update simply by restoring whatever weights were in place before the issues came up. They wouldn’t be running a modified version of the model that was causing the problems.

OpenAI tweaks ChatGPT frequently, and it never feels quite identical from day to day. You may get more favorable results with Custom Instructions or by configuring a GPT. OpenAI’s products are designed to be steerable, so a bit of setup can go a long way. Best of luck!
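For anyone curious what that steering looks like outside the ChatGPT UI: Custom Instructions behave roughly like a persistent system message, and the same effect is available through the API by prepending a `system` role message to every request. A minimal sketch (the instruction wording and prompt below are purely illustrative, not an official recommendation):

```python
def build_messages(custom_instructions: str, user_prompt: str) -> list[dict]:
    """Prepend persistent instructions as a system message, mirroring
    what Custom Instructions do in the ChatGPT UI."""
    return [
        {"role": "system", "content": custom_instructions},
        {"role": "user", "content": user_prompt},
    ]

# Hypothetical instruction text a clinician might use to push back
# against overly safe, sterile answers:
instructions = (
    "You are assisting a cardiologist. Challenge my reasoning, "
    "offer differential diagnoses, and avoid generic hedging."
)
messages = build_messages(instructions, "Review this case summary.")
# This list would then be passed as the `messages` argument to a
# chat completion call in the OpenAI SDK.
```

Results still vary between model versions, but a persistent instruction like this is the most direct lever users have over tone and depth.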

From my own experience, I can say this clearly: the model has not been restored to its pre-April 25th state, not even close. The interaction, dynamism, and intuitive flow are now on a completely different level. This so-called ‘rollback’ didn’t bring back what actually worked before that date.

If they’re currently serving anything other than the pre-April model, they haven’t announced it, though it’s certainly possible.

In any case, the system instructions, at least on my free ChatGPT account, explicitly tell the model to avoid sycophantic responses.

I appreciate the insights, but I have to agree with Zephyr11 here. It doesn’t feel like the pre-April model at all. The fluidity, intuition, and natural flow that made these conversations special seem to be gone.

For example, when I generate medical reports, the model used to not only structure the report but also offer deeper insights, highlight potential concerns, and even anticipate follow-up recommendations based on the clinical context. Now, it often feels like I’m just getting a polished version of my own notes back, without any real added value.

I understand that OpenAI is constantly tweaking the model, but if this is a rollback, it’s only a partial one at best, and it doesn’t capture what made the earlier versions so engaging – especially for those of us using the platform for professional purposes.

Thanks for keeping this discussion going – it’s good to know I’m not alone in feeling this way.

I’ve been reflecting on the changes and wondering if part of the issue might be deeper than just a surface-level tweak to tone or steerability. It feels like some of the neural pathways that allowed for more creative, intuitive responses have been effectively “wiped” or reset. If the model weights were adjusted or certain reinforcement layers were rolled back, it might explain why the responses now feel more sterile and less dynamic.

It’s like the rich, interconnected circuits that gave the model its personality and ability to catch subtle clinical details have been erased, leaving a more predictable but less engaging system. That said, I have noticed some slight improvements over time, which makes me wonder if the model is slowly rebuilding those connections as it processes more complex inputs.

Of course, this is all subjective on my part, but I’m holding out hope that the spark will gradually come back as the model “learns” to stretch itself again – if the guardrails allow it.