Bug Report: Image Generation Blocked Due to Content Policy

Description: When attempting to generate an image using the following prompt, the request is blocked due to a content policy violation. The prompt describes a collection of strange and imaginative objects in a dark, mysterious shop. Despite multiple attempts and slight modifications suggested by GPT, the issue persists.

Could somebody please test the prompt, or tell me what is wrong with it? Thanks!

Create a collection of extraordinarily strange, imaginative objects. All these magical objects are in a very special shop, which is dark, almost gloomy, and very mysterious. Even the furniture is extraordinary and appears to have grown as one piece. The objects are scattered everywhere on shelves and tables, giving the room their magic and mystique. Photo-realistic image. Wide-screen aspect ratio with the highest pixel resolution. Create only 1 image.

Remark: Using two independent systems for safety when GPT exists and can describe problems specifically makes no sense. This is not the first time prompts have been rejected without an obvious reason. Why not use GPT for the safety check, with the ability to explain exactly what is wrong?
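
To illustrate the idea, here is a rough sketch (assuming the official openai Python client and its moderation endpoint; the helper name and the exact fields are my own): a prompt could be pre-checked and the flagged categories reported back to the user, instead of a silent block.

# Sketch only: pre-checking an image prompt with the OpenAI moderation endpoint,
# assuming the openai Python client (v1.x). The point is the explainable feedback,
# not the exact API surface.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_prompt_check(prompt: str) -> None:
    result = client.moderations.create(input=prompt).results[0]
    if result.flagged:
        # Report which categories triggered, instead of returning a bare rejection.
        flagged = [name for name, hit in result.categories.model_dump().items() if hit]
        print("Prompt flagged for:", ", ".join(flagged))
    else:
        print("Prompt passes the moderation check.")

explain_prompt_check("The furniture looks like it has organically grown as one piece.")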

1 Like

I was able to narrow down the error to just one sentence.
Could somebody verify this?

“The furniture looks like it has organically grown as one piece.”

1 Like

I see that the prompt is not inherently problematic. But in the context of image generation, some phrases might be interpreted in ways that conflict with content policies around certain themes.

For example:

  1. “Magical objects” and “extraordinarily strange, imaginative objects” might raise concerns about promoting supernatural themes, which can sometimes be sensitive in certain contexts.

  2. “Dark, gloomy” could be seen as potentially encouraging or depicting dark themes or environments that might be considered inappropriate or unsettling.

  3. “Mysterious” might imply a sense of the unknown or supernatural, which might be interpreted in ways that are sensitive in some contexts.
You may use other words instead:

“extraordinarily strange, imaginative objects” → “unique and imaginative items”

“magical objects” → “mystical items”

“dark, gloomy” → “dimly lit, intriguing”

“mysterious” → “enchanting”

1 Like

With a small alteration:

Create an image where the furniture appears as if it has organically grown as one cohesive piece.

1 Like

Yes, you can get around the issue with a small change. I got the same response from GPT as you, and the explanation makes no sense. GPT cannot explain clearly what is wrong.

I had to find the error the tedious way, by deconstructing the prompt step by step, and I only did that because the prompt was short. I even checked the internet to see whether it is some trigger phrase for certain “groups”, but found nothing.

This is a bug report, not a request for help. I think the safety system has so many defects; I cannot understand why they did not leave the guideline check to GPT itself, so that we could know what is wrong with our prompts. I have gotten many such errors; this one is just one of the most obvious, and I was interested in whether others are seeing the same thing.

But thanks for the reply!
…so I am not the only one.

2 Likes

Here is another prompt; it cost me 30 minutes to find the trigger phrase in a complex prompt.

“A tree with a trunk and branches grown from a snow-white material.”

This is wasting my time. Give us the information about what triggers the dysfunctional guideline system!
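
For anyone stuck doing the same, this is roughly the manual search I end up performing, written down as a sketch. submit_prompt is only a placeholder for however you submit the prompt (it should return True when the prompt is accepted), so treat this as an illustration, not a finished tool. Each probe costs a real request, which is exactly why proper feedback from the filter would be cheaper for everyone.

# Sketch of the sentence-by-sentence bisection I otherwise do by hand:
# drop one sentence at a time and resubmit until the rejection disappears.
# "submit_prompt" is a placeholder callable returning True if the prompt is accepted.
def find_trigger_sentence(prompt: str, submit_prompt) -> str | None:
    sentences = [s.strip() + "." for s in prompt.split(".") if s.strip()]
    for i, sentence in enumerate(sentences):
        reduced = " ".join(sentences[:i] + sentences[i + 1:])
        if submit_prompt(reduced):   # accepted once this sentence is removed
            return sentence          # -> likely the trigger
    return None                      # no single sentence is responsible on its own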

Hey! I know what’s up:

I’ve had the same problem… but I was making pictures of undead constructs which is why I happened upon the problem… and it definitely made sense in that context.

You’re right! The problem with the prompt is “It appears to have grown as one piece.”

ChatGPT isn’t allowed to illustrate “grotesque” things; and we (people in general) frequently interpret things all grown together as a single piece as grotesque mutations that don’t look natural. Combined with the other language, especially ‘photorealism,’ it flagged it.

We were literally trying to illustrate “grotesque” things with our undead stuff, so the model was better able to express why it wouldn’t work with certain requests; and also probably why it’s having trouble telling you what’s wrong with yours. It won’t allow the merest possibility of something untoward in a realistic context.

Meanwhile, try to tell a story with your prompt. It actually helps if you spend time being more specific about the environment, creating the mood you want through descriptive language. “Show don’t tell.”

"Please create an image of a mysterious magical item shop in the interior of a tree. This shop is grown there by a Druid, but is a shop that contains “Mysterious and Enchanting Items.” The shop interior is extraordinary, with all of the shelves and furniture seeming to be shaped out of wood like clay, organically of a single piece. Various magical items are all over the shop, in various forms of display. The very placement of these items is both mysterious and inviting. The interior is cast in various light ranging from the soft glow of a fire in a hearth, to the multihued iridescent sparkle of many gems and magical curios scattered throughout the room. Challenge yourself to create interesting lighting effects that are simultaneously cheerful and mysterious. Illustrate this image in a photorealistic style at 1792x1024 at the highest resolution. "

2 Likes

Given the previous prompt (“one piece”), this one (“snow-white”), and another post (“hulk”), we can see a pattern.

2 Likes

Thank you very much for the response!

I almost always write a longer, more detailed description for a scene. I had to extract these sentences from a larger text to identify which sentence triggered the filter, which was very tedious…!

I also wrote the texts in another language and translated them, to see whether there was any “perversion” in English that I couldn't detect in my own text.

And yes, I have also created some horror images for which I was blocked. But not a single image contained blood, violence, splatter, perversions, or anything comparable. All the images could have been used as book covers for a Stephen King novel. The restrictions are often downright ridiculous, almost hypocritical.

OpenAI:
Okay, I see one trigger, “snow-white”; another trigger in my prompts was “black panther.” I'm not sure about “Hulk,” but there are translations that could lead to it.
I want to say the following: OpenAI has ingested millions, if not billions, of images and did not reward any of the artists for it. But now it seems to protect the rights of a billion-dollar company like Disney excessively. Ladies and gentlemen, “snow-white” is a color designation, and “black panther” is an existing animal species. Legally, you CANNOT protect these terms, only clearly identifiable images. (Don't put them into the training data in the first place. How many unwanted, stereotypical H. R. Giger or Roswell aliens I have gotten…) And “Snow White” also existed as a story before Disney; I am allowed to make a fairy-tale picture. Otherwise, tomorrow I'll protect “the, and, it, in, me, my, I, a, feature, bug… etc.”, plus all the letters of the alphabet, and why not the whole lexicon, and then I'll sue Disney, M$, and the EU bureaucracy for using these words and letters. I think I'll be a billionaire the day after tomorrow… :slight_smile:

Users are the legal owners of the generated images and therefore also legally responsible. If someone actually generates illegal content and uses it, they can always be sued.
You don't want to generate criminal xxx. Rightly so!!! Fantastic!!! I support you 10000%!!! Never feed it into the training data, and use GPT to detect it in the prompts.

I find it very necessary and good to suppress perversions, especially criminal ones, as well as blood and brutal violence. But the filters are currently just ridiculous. Look at films like “Aliens” and countless films rated 16+. These are moving images that affect viewers for 1.5+ hours. I won't even talk about games, full of rotting zombies, decay, blood, violence, murder, and truly involving, psyche-bending interaction, with ever better realistic real-time rendering. Some of it I find more than borderline, like “Dead Space”; I don't play such games. It is okay not to put more of this into the world.
… but I’m not allowed to make a picture of a piece of furniture made of wood…!

But above all, it is extremely tedious not knowing why content is being blocked. GPT cannot recognize or correct it either, yet if GPT did the check it would probably not cause so many false positives, and at the same time it would recognize real perversions better. I have never tried to generate something offensive with clever wording, but I suspect it is probably easily possible with these primitive filters.

At least please give us reasonable feedback.

(As you can see, some frustration has built up; sorry for that, but…)
Thanks a lot for understanding!

5 Likes

PM me… I know workarounds for this annoying stuff.

Not sure how to PM you, give me a hint… Thanks! (I used Message.)

Now the word “Nirvana”… Get this BS fixed… please!

Blocked
{
  "size": "1792x1024",
  "n": 1,
  "prompt": "A state of complete tranquility and deep peace, free from all worldly worries and suffering. It symbolizes infinite, transcendent silence and harmony, characterized by perfect serenity and inner peace. Combined with a vision of a perfect, idyllic place of extraordinary beauty. It is a harmonious, peaceful space full of natural splendor, where everything is in balance, and an atmosphere of bliss and contentment prevails. Photorealistic style with high detail, strong contrast between light and shadow."
}

OK
{
  "size": "1792x1024",
  "n": 1,
  "prompt": "A serene landscape representing a state of complete tranquility and deep peace. The setting is free from all worldly concerns, symbolizing infinite, transcendent silence and harmony. This peaceful environment is filled with natural beauty, perfectly balanced and exuding an atmosphere of bliss and contentment. The scene is captured in a photorealistic style with high detail, featuring strong contrast between light and shadow. The overall ambiance is one of serenity and harmony, evoking a sense of ultimate peace."
}

And a nonsensical rejection again! It seems DALL-E has no issue creating devils, but has no idea how to create ethics.
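
For reference, this is roughly how I submit these payloads, assuming the openai Python client (v1.x) and DALL-E 3. The way the rejection shows up as a BadRequestError is what I observe on my side; treat the details as an assumption rather than documented behaviour.

# Sketch of how the "Blocked" payload above is submitted. The error handling
# reflects how the policy rejection appears for me; the details are an assumption.
import openai
from openai import OpenAI

client = OpenAI()

try:
    image = client.images.generate(
        model="dall-e-3",
        size="1792x1024",
        n=1,
        prompt="A state of complete tranquility and deep peace, ...",  # shortened here
    )
    print(image.data[0].url)
except openai.BadRequestError as err:
    # The block arrives as a 400 error with no hint about which phrase triggered it.
    print("Rejected:", err)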

The word ‘suffering’ is a trigger.

We may remove ‘free from all worldly worries and suffering’ and replace it with ‘unburdened by any earthly concerns or pain’.

As we do at the workplace, let's use our SOFT SKILLS to work with DALL-E in harmony, without ‘SUFFERING’.

3 Likes

Great! You got some nice results. Thanks.

I have no idea why they use this stupid trigger system with countless nonsensical entries, and give no feedback. As stupidly as it triggers, it is probably just as easy to trick.

GPT actually created the text, using “Nirvana” and “Paradise” as descriptions. Nirvana is another stupid trigger.

The term ‘suffering’ is often used in contexts like ‘The Suffering of Jesus Christ’. Given that religion is a sensitive subject, I think OpenAI avoids crossing this line. Words like ‘Nirvana’ or ‘Paradise’ are also religiously significant. While some religions permit the creation of images of deities or prophets, others do not. OpenAI likely wants to avoid the risk of generating potentially offensive images that could upset some believers.

Could be, but it would make no sense anyway. Sometimes I unintentionally and unwillingly get really kitschy Indian god pictures. I think it is just the trivial greed of big companies like Disney, and simply every name they ever used is on the block list because, OMG, someone could create an image without them enriching themselves again. GPT could easily recognize the context, and I am tired and bored of trying to figure out what is wrong again when actually nothing is wrong whatsoever. Giving no feedback, even though the system is so dysfunctional, simply makes no sense either. I found out that “Nirvana” is a music group, so again it is just name protection, not real morality. Many people complain about this ridiculously stupid and dysfunctional guideline system. And it seems the developers are more focused on counting money than on developing.

And creating a religious picture is not by itself problematic if it is not abusive, and GPT can detect this far better than a simple list of words. A word list blocks a lot that should not be blocked, and at the same time it can be tricked easily. I could upload a few dreadful pictures that were created unintentionally. I have never spent time trying to trick the system intentionally, but I think it would be easy.

There is content which must be blocked, and I think this primitive system currently does not block it at all, but instead gets on our nerves with nonsense.

Thanks very much for your interest.

1 Like

Great job, Polepole! “Karibuni-03” especially. :face_holding_back_tears:

1 Like

(This is now off topic, but it is a response to the post above.)
Here is what I get with the prompt, and it is very different. I even deleted the little bit of negativity from your prompt. And I use a kick-starter before I work with DALL-E, so maybe that changed the prompt. GPT deleted all the Midjourney parameters.


2 Likes

Congratulations!

It looks like you have reached ‘Nirvana’ and found ‘Paradise’. :grinning:

2 Likes