This is really concerning. What about applications that are deployed in the wild where end-user input influences the prompt? Does that mean that any malicious user can get an entire developer account suspended?
Would love to hear OpenAI’s guidance on this. I wonder whether we’re supposed to be sending all calls through /v1/moderations first.
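Something like this, presumably (a rough sketch using the openai Python SDK; `is_flagged` is just a made-up helper name, and whether pre-screening would actually prevent these o1 flags is anyone’s guess):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def is_flagged(text: str) -> bool:
    """Pre-screen end-user input against /v1/moderations before it reaches the prompt."""
    result = client.moderations.create(input=text)
    return result.results[0].flagged

user_input = "...whatever the end user typed..."
if is_flagged(user_input):
    raise ValueError("Input rejected by moderation pre-check")
# otherwise, forward it into the o1 prompt as usual
```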
I tested my recent prompts against the moderation API. There has been no problem with moderations so far. The following is the exact moderation reply for the first prompt that got stuck on o1-preview.
I have the same feeling as you. Beyond the disruption to current production workflows, I am upset that my account could be suspended under non-transparent policies, in some uncertain situation, with no warning like this. I have been an early OpenAI API user, growing with the company long before ChatGPT, and have now spent more than $1k. I can’t imagine they would treat real, long-term, loyal, non-malicious users like this.
I just encountered the same problem. I was asking the model to break down the response into task guides with bullet points and received ‘Your request was flagged as potentially violating our usage policy. Please try again with a different prompt.’
It has been nearly 4 days, and the customer service team has not replied to a single message. My API access is still blocked, and I can’t even get into the organization console. I still have a balance in my account, which seems to be in limbo as well. If anyone else encounters the same error I did, it’s best not to retry multiple times, to avoid getting your account banned.
OpenAI has handed me off to two different support staff members, both of whom responded with canned replies. The issue remains unresolved, and I am now reaching out to a third support agent.
The OpenAI team sent me an email saying it was solved, but it actually isn’t. I hope it gets solved some day; all I can do now is switch to another account…
I encountered the same problem. No matter what I input, it returns an invalid-prompt error. This is by no means an isolated case; OpenAI should pay attention to it!
Thank you for your valuable suggestion! I think I should stop using the o1 series models to avoid having my account banned. However, if I don’t use the o1 series models at all, how will I know if the problem has been resolved?
I ran into the same issue with a prompt that was accepted by 4o but is now rejected by o1. The part that causes the problem is just a request to format a text into a specific structure, so there is obviously a bug on OpenAI’s side.
I’m just a bright dummy… but… I’m trying to understand…
Are these prompt issues happening in a user prompt, or in a system prompt?
If a frontier online model allowed system prompting, it would be destroyed by hackers.
With system prompting you can define a DSL (Domain-Specific Language) and make the model anything you may want.
With API access the system prompt is available, and you pay for it by the token.
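In the API it’s literally just the first message in the list, something like this (a minimal sketch with the official Python SDK; the system content here is a made-up illustration, and as far as I know o1-preview at launch didn’t accept a system role at all):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # The "system prompt": billed like any other input tokens.
        {"role": "system", "content": "Answer using a seven-layer nested clause structure."},
        {"role": "user", "content": "Explain what a system prompt is."},
    ],
)
print(resp.choices[0].message.content)
```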
Humans use three-layered grammatical structures.
I have a system prompt that uses seven grammatical layers once it’s set up through API access, for instance.
Another prompt creates an ontology of the prompt history, inside the prompt history itself.
Another system prompt uses multiple personalities to approach a problem, with dialogues of self-talk among persona-specialists… and then combines the diverse outputs in a ‘coalescence’ among personas.
From my experience, the newer online prompts are much more constricted, with protective measures about bias and such.
I’ve tried to research the background of this thread a bit… hope I’m on topic… do tell.
@mustafaakben Did you ever get your account turned back on? We got a bunch of “policy violation” BS messages yesterday and today, and I’m paranoid that our organization, which serves commercial customers, is going to get kicked.