I’m experiencing an issue where simple prompts are being flagged as violating OpenAI’s usage policies, but this only happens when using the GPT-5 model.
Background:
My account has already passed verification
This issue is specific to GPT-5 - other models work fine
Even basic, innocuous prompts get flagged
Issue Details:
When I send prompts to GPT-5, even simple ones like asking for help with coding or general information requests, they get flagged as policy violations. The same prompts work perfectly fine with GPT-4 and other models.
This looks similar to the issue some organizations had with o1-preview, where almost anything their organization sent to the model was flagged and refused.
The input moderation is only supposed to inspect for attempts at extracting reasoning internals.
Self-service: create a new project and a new API key, and initially do not restrict the key to any endpoint or model.
Don’t keep retrying against these errors, as repeated attempts can trigger automatic bans. You will also see there is no response_id to report.
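If you want a single-shot check rather than blind retries, a minimal sketch of classifying the error body and stopping after one attempt (the helper name and parsing are mine; the JSON shape is assumed from the error the API returns for this failure):

```python
import json

# Hypothetical helper: decide whether an error body is this specific
# "invalid_prompt" policy flag. For this failure there is no response_id
# and retrying is pointless, so the caller should stop after one try.
def is_flagged_prompt(error_body: str) -> bool:
    """Return True if the error is the 'invalid_prompt' policy flag."""
    try:
        err = json.loads(error_body).get("error", {})
    except (ValueError, TypeError):
        return False
    return err.get("code") == "invalid_prompt"

example = '''{
  "error": {
    "message": "Invalid prompt: your prompt was flagged as potentially violating our usage policy.",
    "type": "invalid_request_error",
    "param": null,
    "code": "invalid_prompt"
  }
}'''
print(is_flagged_prompt(example))  # True: stop here, do not resend
```

If this returns True on a prompt as innocuous as “Hello”, record the timestamp and report it rather than resending variations.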
Organization-specific issues will need to be resolved by actual account inspection by OpenAI staff, via “help” messages on the platform site, not by a bot sending back inaction and generic advice.
I have flagged this for escalation also, as this is a deeper infrastructure issue than any organization setting that support is likely to find.
Likely a regression causing this previously widespread issue again:
This very specific error is unlikely to be any API input validation related to your model access or ability to stream a response. If you can stream o3, you have the same rights to gpt-5.
Please reach out via help.openai.com and make use of the support system in the bottom right corner, or by emailing support@openai.com
This is the quickest way to alert support that there is an issue. I will also attempt to raise this with OpenAI; however, please help me to help you by also raising it with support via one of the two methods above.
Hi Foxalabs, thank you for your reply! I’ve already reached out through the support system yesterday and sent an email this morning. I’ll update here if I receive any helpful response or resolution. Thanks again for raising this issue on your end!
Thank you for posting this. It took THREE tries to use GPT-5 the first time, due to three different errors, including a “policy” violation I definitely had not committed.
I have had this issue since o1-mini. I reached out to support three times via the Help dialog in platform settings; every time I requested human support, they could not resolve it.
After GPT-5 launched, all but one of the prompts I create to generate an image fails. I’m not trying to be funny when I say this: even the prompts that the AI re-words into a new prompt for me, guaranteeing they will pass the content policy check, still fail with a message that they don’t follow the content policy. So basically, the AI can’t pass its own content policy check either.
I used to be able to use image uploads of myself in new images with no issues, but now ChatGPT rejects all images of me, including ones I’ve successfully used many times before. Even when I confirm the image is of myself and grant permission or consent within the prompt, which is what the ChatGPT Help topics said to do, ChatGPT still refuses to acknowledge the granted permission and tells me I don’t have permission from the person in the picture to use it. It says the same thing about cartoon characters, again claiming I don’t have permission from the person in the picture, as if ChatGPT is mistaking the cartoon for a real person.
ChatGPT has also completely restricted editing access to all of my ChatGPT-generated images in my library, telling me that the image I’m trying to edit was detected as protected under copyright law, even though they are my own creations and I’ve only used them for private purposes.
None of the prompts saved in my prompt history work when I try to regenerate them either; all of them now return a message that they don’t follow the content policy as well.
And to top it all off, as a free user with a limit of 4 images per day, each failed prompt still uses up my quota, until I’m eventually told that I’ve used my daily image creation limit. How am I reaching the image creation limit when it isn’t allowing me to create any images at all?
Contacting OpenAI support via chat and email also wasn’t successful. “We’re working on improving our filters, so this should become less of an issue in the future.” is not helpful in this case.
Thanks for the detailed report and for sharing request IDs — that’s very helpful.
Images (generation, edits, and library items blocked). You mentioned:
New image prompts being flagged,
Edits to your own photos being rejected for “no permission,”
Cartoons being misdetected as real people, and
Your previously generated images in the library now showing “copyright-protected.”
Targeted tests (so we can separate the failure modes):
Generation (no people): In a new chat, try “Generate a landscape photo of a sunset over mountains.” If blocked, share the timestamp and screenshot.
Cartoon misclassification: Try editing a clearly stylized/cartoon image where no real person appears; include the image and the exact prompt used.
Your own photo with consent: In a new chat, upload a photo of yourself and use just:
Library copyright flag on a ChatGPT-generated image: Open one affected image and copy the image link/ID (or a screenshot showing the message) and the timestamp.
Temporary workarounds:
Focus edits on non-person regions when possible (e.g., provide a mask or ask to “change background only”).
For generation tasks while we debug, use the API with the image model or run the same request in a fresh chat without prior context.
For the image issues, it would help if you could share the following for one example of each (generation, your-photo edit, cartoon misclassification, library copyright): the exact prompt, the image file (or library link/ID), the timestamp (UTC), and the verbatim error text.
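To keep the four failure modes separable, the requested details can be collected in a consistent record per example. A minimal sketch (the field names and helper are illustrative, not an official support schema):

```python
from datetime import datetime, timezone

# Hypothetical report-builder: one record per failure mode, with the
# timestamp captured in UTC as requested. Field names are made up for
# illustration; they are not an OpenAI support format.
def make_report(mode, prompt, error_text, image_ref=None):
    return {
        "failure_mode": mode,      # "generation", "own-photo edit", ...
        "prompt": prompt,          # the exact prompt used
        "error_text": error_text,  # copy the message verbatim
        "image_ref": image_ref,    # library link/ID, if applicable
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
    }

report = make_report(
    "generation",
    "Generate a landscape photo of a sunset over mountains.",
    "This request doesn't follow our content policy.",
)
```

One record like this per failure mode makes it much easier to see which checks are misfiring.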
OpenAI “support” needs to stop wasting time with bot output: telling people to pay to debug platform issues themselves, telling them to risk bans and lose trust level by stimulating false safety-prompt refusals, and telling them to do things completely unrelated to the problem report, reports which are then ignored after all the time-wasting. This is from personal experience.
There is no further communication or interaction needed with the org user, except to get the organization and project IDs, escalate to engineering, and make the same call initially reported.
As OpenAI support-support, I say: follow these steps:
Fault: this organization cannot even say “hello” to GPT-5 without prompt errors; stop asking for BS.
Then: get the organization and project IDs, escalate to the appropriate engineering infrastructure team, and make the same call initially reported using the existing credentials, after adding more credits.
Look at the organization authentication performed by the backend when the reasoning-specific prompt inspection is run against gpt-5. See the database connections that moderation is making to the organization and to third parties.
Trace the I/O code of database calls by this endpoint and its ingestion of the contents to be inspected.
Look at the moderation model weights, versioning, and response to a specific request; verify the model is not failing based on org-specific details.
Look at the organization’s database entries directly, in the manner that every step of API-call ingestion and every worker would, looking for indexing errors, corruption, and record-lookup issues.
Thus: Discover why one organization (or many yet to be discovered) is completely failing.
Immediate Remediation: Offer to build them a duplicate organization loaded with the same credit balance at same tier and more, and ID verify it. Set the credit expiry to “never”.
Hey there, thank you for taking the time to share such a detailed report. I can hear how frustrating this has been, especially when you’re simply trying to run very basic prompts like “Hello” and are getting blocked by the system. I’m really sorry you’ve had to deal with repeated invalid prompt errors and responses that didn’t feel useful or relevant.
To provide some clarity: prompts are automatically scanned against a set of classifiers that look for patterns associated with misuse (for example, jailbreak-like structures, distillation attempts, or certain bio/chemistry workflows). These safeguards are important, but sometimes they sweep up perfectly safe prompts by mistake — which is why something as simple as a “Hello” can trigger an error. Enforcement can also differ by model: stricter checks are applied to reasoning variants (like o3, o4-mini) compared to others, so the same input may work in one model and fail in another.
The challenge is that the system currently reuses the same “invalid prompt” error message whether the block comes from:
content-level safety rules,
classifier triggers (e.g. distillation/jailbreak detection), or
account-level restrictions (for example, if streaming is attempted while an account is flagged as WARN).
That’s why the message can feel vague and unhelpful: it doesn’t explain which mechanism was responsible in your specific case. We deeply appreciate your patience here, and I’m going to flag this to our team internally; unfortunately, I don’t have a timeline for when this issue will be resolved.
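The ambiguity described above can be sketched as follows. This is purely an illustration of why the error is undiagnosable from the outside, not OpenAI’s actual implementation; the cause names are taken from the list in this reply:

```python
# Three distinct internal block sources (assumed labels, from the list
# above) all surface as the identical client-visible error payload.
INTERNAL_CAUSES = [
    "content_safety_rule",        # content-level safety rules
    "classifier_trigger",         # e.g. distillation/jailbreak detection
    "account_level_restriction",  # e.g. streaming while flagged as WARN
]

def external_error(_cause):
    # Whatever fired internally, the client sees the same thing.
    return {"type": "invalid_request_error", "code": "invalid_prompt"}

codes = {external_error(c)["code"] for c in INTERNAL_CAUSES}
print(codes)  # {'invalid_prompt'} — indistinguishable from the outside
```

Because all three causes collapse to one code, the caller cannot tell a content block from an account-level restriction without help from support.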
I have the same issue with all reasoning models, and my organization is verified. I have reported it 3 times through the Help Center and they are not willing to resolve it!!!
{
  "error": {
    "message": "Invalid prompt: your prompt was flagged as potentially violating our usage policy. Please try again with a different prompt: https://platform.openai.com/docs/guides/reasoning#advice-on-prompting",
    "type": "invalid_request_error",
    "param": null,
    "code": "invalid_prompt"
  }
}