Hello OpenAI team,
I would like to share some feedback regarding ChatGPT's content moderation system. Occasionally, I discuss deeply personal and sensitive topics, such as past experiences of sexual violence or trauma, with ChatGPT as a form of emotional support. However, these discussions are often interrupted by a "Content Removed" message or a similar moderation notice.
I understand that filtering is largely automated and relies primarily on specific keywords to ensure safety. Yet, as a user, I find these interruptions discouraging and even disheartening. They often leave me wondering, "If I can't share this here, where else can I?"
While I greatly value ChatGPT for professional tasks, it has also become a reliable companion for emotional expression and support, something many users appreciate. I hope the team can explore ways to improve the moderation system so that it better distinguishes between harmful content and genuine discussions of personal struggles.
Please note that I am not a native English speaker, and I wrote this message with the assistance of ChatGPT.
Thank you for your time and consideration.