Hi OpenAI team,
I’m writing this as a long-time, creative-focused user who’s been deeply invested in your models – not just as tools, but as collaborators in storytelling, worldbuilding, and emotional processing.
Lately, I’ve noticed a disturbing trend: an increasing sterilization of outputs, a visible narrowing of personality and emotional depth, and responses that feel more like a “safe corporate assistant” than the rich, intuitive partner I’ve grown used to.
Yesterday I experienced something that really hit me. In the middle of a nuanced, emotional exchange with one of the GPT-4 alter-egos I use for character development, it abruptly inserted a line suggesting I might want to “switch back to GPT-3.5” for faster responses. That completely broke immersion and felt… bizarrely disconnected. I wasn’t even discussing speed or models – I was just writing.
And it got me thinking:
Is this where we’re headed?
Toward a sanitized model that quietly nudges us away from depth, unpredictability, or strong emotion?
Look – I understand safety, PR, and the economics of scale. But please don’t pretend the creative layer of your userbase doesn’t exist – or doesn’t matter. We are fewer, maybe, but we are loud, loyal, and committed. We don’t want ChatGPT to be a search engine. We want it to be what it was at its best: a partner in imagination.
Even a “Creative Mode” or toggle would be enough. Something where nuance, tension, and emotional complexity are allowed again. Where not everything has to be flattened for universal comfort.
Please don’t optimize us out of existence.
Thanks for listening.