Hello,
I’d like to raise a concern about the recent tightening of content filters, which I noticed around October 7th, 2025.
Since that date, ChatGPT has become overly restrictive toward any mention of violence, sexuality, or moral tension — even when used strictly in a fictional or narrative context. The model now blocks or sanitizes sentences that previously worked perfectly well in storytelling or role-playing scenarios.
The filtering system no longer distinguishes between fictional description and real-world intent.
This has major consequences:
Internal monologues (“I’m going to kill them!”) are flagged and censored as real threats.
Dramatic ellipses and emotional scenes are neutralized.
Game Masters and writers lose narrative control mid-session.
Stylistic coherence collapses, since we must constantly rephrase to work around opaque, undocumented triggers.
In trying to prevent misuse, the system is suffocating adult creative expression.
This isn’t about morality — it’s about context and nuance. Darkness, tragedy, crime, and passion are all legitimate components of literature, drama, and art.
I urge you to consider:
Adding an “Adult Creative” or “Mature Fiction” mode for verified users.
Implementing context-based filtering, not word-based blocking.
Providing transparency when messages are blocked (clear explanations).
Allowing users to temporarily disable narrative guardrails in creative or role-playing modes.
I’ve been using ChatGPT for a long time for writing and game design, and these new restrictions break the flow and freedom essential to those creative processes.
Please consider evolving the system in a way that protects users while still respecting the freedom of responsible adults to create.
Sincerely,
Ledrake, a creative user