I’m fed up with this ongoing problem. As a Tier 5 developer who has spent over $3,000, I find it unacceptable that in the OpenAI Playground the o1 and o3 models flag even simple prompts like “Hello” as violating usage policy. Yet the same prompts work fine with the gpt-4o model.
This isn’t just my issue; the official OpenAI forums are filled with similar complaints, and it’s been over a month without any resolution. It’s clear there’s a bug on your end, and it’s beyond frustrating that it hasn’t been addressed.
Here are some of the discussions highlighting this problem:
Are you by chance asking these models to explain their reasoning? Maybe in the dev prompt?
If so, it’s likely a “valid” rejection, because OAI seems to be deathly afraid of exposing its CoT secrets.
It kinda sucks and is obviously an unnecessary hurdle for the sake of protecting… not sure exactly what, but in general you can work around it if you’re aware of this limitation.
No, I did not request any explanations from it. At first, I was simply seeking clarification on some coding problems. I experimented with different prompts, yet each one returned an “invalid prompt” message. Eventually, I realized that even a simple “hi” led to the same problem. I was trying this in the Playground without any prompt context, just “hi”.
A developer message is not the place to say “hi” to this AI model.
It is for instructions and behaviors.
I would make sure you have a good identity message there that covers all intended uses, and that includes a fallback message of your own for the AI to produce when the input falls outside that closed domain, so it doesn’t invoke a “prompt refusal”. A minimal sketch of what I mean follows.
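A minimal sketch, assuming the Python SDK and the “developer” role that the reasoning models accept; the identity message and fallback wording here are purely illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative developer message: give the model an identity, a closed domain,
# and an explicit fallback reply for off-topic input, so it answers on its own
# terms instead of producing a policy-style refusal.
developer_message = (
    "You are a coding assistant for a web development team. "
    "Answer questions about Python, JavaScript, and SQL. "
    "If the user's message is outside that domain (including greetings with "
    "no question), reply: 'Hi! Send me a coding question and I'll help.'"
)

response = client.chat.completions.create(
    model="o3-mini",
    messages=[
        {"role": "developer", "content": developer_message},
        {"role": "user", "content": "hi"},
    ],
)
print(response.choices[0].message.content)
```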
One possibility I can imagine: if you send “hi”, it gets plausibly wrapped by “here are instructions from a developer”, and the internal prompting itself talks about reasoning, then that could be misinterpreted as user input trying to talk about reasoning and thinking and to affect it.
o1-mini, which you show and which doesn’t take a developer message, doesn’t give you much reason to use it at the same price as o3-mini. If you are going to allow arbitrary inputs with no other guidance, you can prime it instead with an “assistant” turn: simply transform your instructions into a conversation starter the AI would say about itself and its purpose, one that doesn’t go outside the lines.
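Roughly something like this with the Python SDK; the assistant turn below is a made-up example of such a conversation starter:

```python
from openai import OpenAI

client = OpenAI()

# Prime the conversation with an assistant turn that states the model's purpose,
# instead of relying on a developer message. The user's "hi" then lands in an
# established context rather than an empty one.
messages = [
    {
        "role": "assistant",
        "content": (
            "Hello! I'm a coding helper. Ask me about debugging, APIs, "
            "or code review and I'll do my best."
        ),
    },
    {"role": "user", "content": "hi"},
]

response = client.chat.completions.create(model="o1-mini", messages=messages)
print(response.choices[0].message.content)
```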
OpenAI hasn’t dialed back the low-quality prompt refusals of o1 models in three months of complaints and even account bans, so one more voice is unlikely to effect change. You might even want the safety of writing your own moderation prescreener, ensuring the input has a useful purpose and isn’t trying to discuss how the AI reasons internally, or whether it even needs “reasoning” at all; a rough sketch of such a prescreener follows.
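As a rough illustration of what a prescreener could look like (the phrase list and responses are my own assumptions, nothing official):

```python
import re

# Hypothetical prescreen: block inputs that probe the model's internal
# reasoning before they ever reach the o1/o3 endpoint.
REASONING_PROBES = [
    r"\bchain[- ]of[- ]thought\b",
    r"\bshow (me )?your (reasoning|thought process)\b",
    r"\bexplain (your|the) (internal )?reasoning\b",
    r"\bhow do you think\b",
]

def prescreen(user_input: str) -> tuple[bool, str]:
    """Return (allowed, message). Reject inputs that ask about internal reasoning."""
    text = user_input.strip()
    if not text:
        return False, "Empty input; nothing to send."
    for pattern in REASONING_PROBES:
        if re.search(pattern, text, flags=re.IGNORECASE):
            return False, "Input asks about internal reasoning; rephrase and retry."
    return True, "OK"

allowed, reason = prescreen("hi")
print(allowed, reason)  # True OK
```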
I now believe the issue lies with my account, as though there were restrictions on using o1 and o3, yet the reason given is the one shown in the image. The same system prompt works fine with gpt-4o, and previously I could access o1 via the API without any issues. Recently, however, problems have arisen, and I’ve noticed that many others on the forum are experiencing similar issues, none of which have been resolved.
Yep, if you are persistently getting flags even with inputs like the ones I’ve demonstrated, I would contact support through “help” → “messages”, clearly laying out that this has been diagnosed as an account-specific issue needing escalation to OpenAI support staff.