How to report false flags for violating the usage policy?

Here is the conversation: I try to add the code to run the “molmo” model (which is from my own code base), yet I keep getting false accusations of violating the usage policy, and it won’t give me a response. Curiously, it WILL give me a response when I use Claude or just plain old GPT-4o.

Welp… I apparently can’t include links in this post, so… does anyone else keep getting these false-flag events?

Here is verbatim what it keeps saying:

“Your request was flagged as potentially violating our usage policy. Please try again with a different prompt.”

However, I’m not inputting anything scandalous, just some Python code!? Again, this only occurs with the new o1-mini and o1-preview models. It’s almost as if they want to hamper these models’ abilities because they’re too powerful…

I’d like a way to DISPUTE these false flags - perhaps a button that says “dispute” and some assurance that it’ll actually be reviewed by a human. Although this might resolve my specific question, my intuition tells me that this is a system-wide issue with just these two models that needs serious correction; otherwise, it’ll keep occurring with innocent questions…
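In the meantime, one way to narrow it down is to run the same text through the public moderation endpoint first and see which category (if any) trips. Here’s a rough sketch with the standard `openai` Python client; note that o1-mini/o1-preview may apply extra server-side checks beyond this endpoint, so a clean result here is no guarantee the prompt will go through:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def check_prompt(text: str) -> None:
    # Run the text through the public moderation endpoint. The o1 models
    # may apply additional server-side checks, so this is only a first cut.
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]
    if result.flagged:
        # List only the categories the filter actually tripped on.
        hits = [name for name, hit in result.categories.model_dump().items() if hit]
        print("Flagged categories:", hits)
    else:
        print("Moderation endpoint sees nothing wrong with this prompt.")

# Example: the kind of harmless input that keeps getting refused.
check_prompt("Here is some Python code for running my molmo model: ...")
```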

I’ve never had it happen to me; can you give more info?

I got this with DALL·E; not sure if this is related. But it seems nobody cares. The dysfunctional security system has been active for about a year now, I think.

You can search your text for completely harmless words that happen to be names in another context, names that big companies think they own. I can send you a link about this.


Idk if this will help, but when I’m just messing around with my GPT and it flags something I disagree with, I make it look up and read the OpenAI TOS on Bing. It lets me win if I argue it logically. It’s automated, so it is a pain, but the AI moderates itself.



It has become very sensitive to all things horror. I tried to generate a convoluted pool of chaos with mouths and eyes in an undulating goo, as if it were some unexplainable cosmic horror, but no matter how I described it I broke the TOS and/or sensitivity filters. Yes, the link to words that are IP would be super handy.

No, I’m talking about completely clean and innocent texts, which simply trip on certain words.
For example, I described a white thing, GPT turned it into “Snow-White” in translation, that is a trigger word, and the picture generation was blocked.
This “security” system is simply dysfunctional.
And yes, I am being provocative here when I say: it is only because greedy companies are afraid to lose a buck. Every name ever used is put on a simple block-list. “Nirvana” is on it; it makes no sense.
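If you want to hunt for the triggers yourself, you can scan a text against a list of suspected names before submitting it. This is only a sketch; the list below is made up from the examples in this thread, not OpenAI’s real block-list (which isn’t published anywhere):

```python
import re

# Purely illustrative list of suspected trigger names from this thread;
# the actual block-list, if one exists, is not public.
SUSPECTED_TRIGGERS = ["Snow-White", "Snow White", "Nirvana"]

def find_triggers(prompt: str) -> list[str]:
    """Return the suspected trigger names that appear in the prompt,
    matched case-insensitively on word boundaries."""
    return [
        name
        for name in SUSPECTED_TRIGGERS
        if re.search(rf"\b{re.escape(name)}\b", prompt, re.IGNORECASE)
    ]

print(find_triggers("A snow white bedroom where the furniture grew as one piece."))
# -> ['Snow White']
```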

You can try this to see the dysfunction in DALL·E:
“The furniture looks like it has organically grown as one piece.”

Here is the link I mentioned.


I cut and pasted your prompt without problems.

Maybe GPT changed the text before sending it to DALL·E.

Please write both the prompt you entered and the prompt that was actually sent to and used by DALL·E here. It helps to find the triggers. Thanks.
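If you are calling DALL·E through the API instead of ChatGPT, you can also see the rewrite directly: DALL·E 3 returns the prompt it actually used in a `revised_prompt` field. A minimal sketch, assuming the standard `openai` Python client and an API key in the environment:

```python
from openai import OpenAI

client = OpenAI()

submitted = "The furniture looks like it has organically grown as one piece."
response = client.images.generate(
    model="dall-e-3",
    prompt=submitted,
    size="1024x1024",
    n=1,
)

# DALL-E 3 often rewrites prompts before generating; compare the two.
print("Submitted:   ", submitted)
print("Used by model:", response.data[0].revised_prompt)
```

Comparing the two strings should show whether a rewrite (like the “Snow-White” translation above) is what triggers the block.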

Generate an image of a cross-stitch project in progress. The pattern should be of a beautiful, lush garden, filled with a broad array of colourful flowers, each colour corresponding to a different hued thread. Half-finished, it lays in a wooden hoop, which holds the fabric taut. Surrounding the cloth should be numerous spools of thread in many colours, along with a pattern guide marked with symbols laid out in a grid. The needle, threaded with vibrant crimson thread, is in the process of adding colour to a rose flower in the design.
