Feedback on GPT-4.1, GPT-4.5, and GPT-5.1

I’m reporting repeated false-positive safety flags and severe overblocking in ChatGPT.

The issue is not that the model refuses genuinely dangerous content. The issue is that it overreacts to ordinary physicality, emotional tension, human closeness, and realistic fictional interaction.

In many cases there are no explicit sexual scenes at all, yet the model keeps interpreting everything as sexualized. It reacts as if teenage characters cannot look at each other, cannot have inner reactions, cannot flirt, cannot be physically close, and cannot exist as real emotional human beings. The result feels less like a capable assistant and more like a sterilized censorship layer.

This becomes even worse in fantasy, medieval, magical, mystical, or spiritual fiction. Context does not help: the model still flags or refuses ordinary writing that includes warmth, touch, skin, closeness, emotional tension, hugging, kissing, or charged interpersonal dynamics.

It also becomes difficult to discuss morally complex or socially uncomfortable situations in fiction, because the model often refuses too early instead of distinguishing context, intent, and degree. Realistic adolescent behavior is not automatically the same thing as harmful sexualization, but the model often treats it that way.

I am not asking OpenAI to allow genuinely harmful or illegal content. I am asking for less aggressive overblocking and better contextual judgment. Right now the system often flags basic human intimacy and emotional realism as if everything were inherently unsafe.

This makes creative work frustrating and unreliable. Please review the current moderation sensitivity and reduce false positives in nuanced fictional contexts.

Right now it honestly feels like the model treats any emotionally charged closeness as suspicious by default. Affectionate hugging, physical warmth, tension between characters, kisses, jealousy, impulsive teenage behavior, morally messy but realistic situations: all of this gets flagged far too aggressively. That is not nuanced safety; that is overblocking, and it makes the assistant much less useful for serious creative work.