Content Policy Conflict - Makes no sense

Welcome to the club! The security system is absolute BS and makes no sense at all! The developers are not fixing it.

In my case, it blocked content that wasn’t scary at all. This part of the security system appears to use a crude word block list (no GPT involved), and seemingly any name ever used commercially is blocked. Names like “Snow White,” “Black Panther,” or “Nirvana” were blocked, and neither GPT nor the user gets any feedback on what triggered the block. The system doesn’t just block some dark images; it blocks 100% acceptable prompts too.

The advice I can give you: always check the actual prompt used, as GPT rewrites your text and sometimes inserts trigger words that were not in your original prompt. Then search, or ask GPT, whether any word in it is a name ever used by a company.
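To make the problem concrete, here is a minimal sketch of how a naive word-block-list filter could trip on an innocent prompt. This is purely an illustration under my assumptions; the list entries and the substring matching are hypothetical, not the actual implementation.

```python
# Hypothetical word block list; entries are assumed examples only.
BLOCK_LIST = ["snow white", "black panther", "nirvana"]

def naive_filter(prompt: str) -> list[str]:
    """Return the block-list entries found as substrings of the prompt."""
    lowered = prompt.lower()
    return [term for term in BLOCK_LIST if term in lowered]

# A completely harmless prompt still trips the filter:
print(naive_filter("a wolf with snow white fur in a dark forest"))
# → ['snow white']
```

With matching this crude, any prompt containing “white” is one GPT rewrite away from being blocked, which matches what I saw in my own cases.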

Like you, I mainly create fantasy images, and all the dark or horror pictures I ever generated were things that could be used for a book cover: no blood, violence, or gore. But as I said, it even blocks 100% acceptable prompts. You had “white” in your prompt, so GPT might have inserted “Snow White” into the text for DALL-E; GPT did exactly that in one of my cases.

(In one case, a frog-like creature licking its own face with a long tongue was too much, yuck…)

Sometimes DALL-E generates a picture but simply doesn’t create the image as described in the prompt.
