I’m a regular ChatGPT user (not a developer) who often uses the model for thoughtful conversations and creative writing. Lately, I’ve run into an issue where certain prompts that are completely safe, nonviolent, and non-sensitive get silently blocked and produce no response at all.
For example, even a simple prompt like this gets caught in moderation:

> “Hi, GPT. Just wanted to test out whether the issue of your responses getting eaten is still happening this morning.”

That’s obviously not dangerous or inappropriate, yet there’s no error message, no warning, no transparency. Just silence.
I’d really like to know:
- Is OpenAI aware of this?
- Is there a way to see which words or phrases are triggering false positives?
- Will moderation be refined to allow deeper, more human conversations, especially about difficult but important topics?
This seems like a real barrier for people trying to use the platform seriously. Thanks for listening.