Critique of the Moderation Policy for Fiction Writing

I am writing to provide detailed feedback regarding ChatGPT’s moderation policy as it relates to fiction writing for mature audiences.

1. Understandable limits

I fully support the need for clear boundaries regarding content that is widely considered unacceptable—extreme violence, explicit sexual acts (18+), or inhumane scenarios. These limits are reasonable and necessary.

2. Inconsistencies and over-conservatism

Since the late 2025 update, moderation has become inconsistent and sometimes illogical:

  • ChatGPT frequently rejects depictions of moderate violence, injuries, or realistic consequences, even when the story context clearly justifies them.

  • In some situations, similar or even more intense scenarios are allowed, giving the impression that acceptability is determined arbitrarily by the model rather than by what the public generally tolerates.

  • The model increasingly struggles to differentiate story types—children’s fiction, teen-oriented fiction, and darker mature narratives—forcing mature stories into child-safe tones and undermining narrative realism.

  • Some content is rejected solely because it may be disliked by certain groups, not because it falls outside public tolerance.

3. Shift in moderation orientation

Previously, ChatGPT seemed guided by what is generally tolerated by the public. Recent behavior appears to prioritize universal comfort over context, which over-restricts creative writing unnecessarily for mature audiences.

4. Suggested improvements

To better support writers while maintaining safety:

  • Adopt context-aware moderation, distinguishing between age-appropriate story categories.

  • Allow depictions of violence, injuries, and realistic consequences—including moderately graphic elements—so long as they remain within what mature audiences generally tolerate.

  • Increase consistency and transparency in moderation decisions.

  • Consider letting users specify intended audience and tone as parameters to guide moderation.

I believe these adjustments would allow ChatGPT to remain safe while giving writers the flexibility needed for realistic and mature storytelling.

Thank you for your attention and consideration.


“… the rollout won’t happen unless and until the company’s new age-prediction AI model proves effective. OpenAI is currently testing the AI’s ability to figure out when someone is under 18 …”

Well, I think it’s safe to assume that anyone under 18 is a free user. Therefore, OpenAI can offer adult mode only to paying users.
