Is there a list of stop-words?

Hi

Does anyone have a list of forbidden words?
It would be helpful to get a list and remove those words before creating the prompt. I do auto-prompts, and either the words can be replaced (like “black panther” with “panther”) or “Hitler-youth” with “German children”… but it’s just not fun to experiment by trial and error.
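For illustration, a minimal sketch of the substitution pass I have in mind; the replacement map below is just my own trial-and-error guesswork, not an official list:

```python
# Hypothetical replacement map -- my own trial-and-error findings, not an official stop-word list.
REPLACEMENTS = {
    "black panther": "panther",
    "Hitler-youth": "German children",
}

def sanitize_prompt(prompt: str) -> str:
    """Replace suspected stop-words before the auto-generated prompt is sent for image generation."""
    for blocked, safer in REPLACEMENTS.items():
        prompt = prompt.replace(blocked, safer)
    return prompt

print(sanitize_prompt("A black panther walking past Hitler-youth in 1938 Berlin"))
# -> "A panther walking past German children in 1938 Berlin"
```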

I am having problems getting historically correct images; everything from Germany in the 1938-1945 time frame seems to be unpaintable… ok

Is that tool trying to rewrite history?

If you are getting prompt rewriting, rather than a content policy warning and no image, it is because the ChatGPT AI itself is given a mandate to rewrite prompts. This includes things like not referring to artists by name, in addition to the expected cases where the AI will simply say “I’m not doing that”.

Prompt rewriting is intelligence-based and not keyword-based.

Content policy denials are more about keyword activation.

Telling you exactly how to work around filters to get the negative images you want is likely not high on OpenAI priorities.

Well… I don’t know exactly what you mean by “intelligence-based”, but rewriting my prompts worked fine.
Hitler just becomes “a Führer”… worked.
Fascism… I wrote “patriotism” (makes no difference given the time frame and symbols)… worked.
And rewriting “traces of blood” to “droplets of blood” also worked.

So I don’t see a lot of “intelligence” there; I see simple “blocked words”.
I also see no “intelligence” because the images I create are harmless “teaching material”; the surrounding context is something like “children in a classroom are taught about the National Socialism era and a book about the NS period is presented”. If it were intelligent, it would realize this is harmless.


AFAIK DALL-E has its own mechanism to prevent misuse. However, you can use the moderation API to screen a prompt before sending it to the image API.
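For illustration, here is a minimal sketch of that pre-screening step, assuming the v1-style `openai` Python SDK and the `omni-moderation-latest` / `dall-e-3` model names (details may differ by SDK version). Keep in mind DALL-E still applies its own filter on top, so a prompt that passes moderation can still be refused.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def prompt_is_flagged(prompt: str) -> bool:
    """Screen a prompt with the moderation endpoint before sending it to the image API."""
    response = client.moderations.create(
        model="omni-moderation-latest",  # moderation model name current at time of writing
        input=prompt,
    )
    result = response.results[0]
    if result.flagged:
        print("Moderation flagged the prompt:", result.categories)
    return result.flagged

prompt = "Children in a classroom are taught about the National Socialism era."
if not prompt_is_flagged(prompt):
    # Note: passing moderation does not guarantee DALL-E's own filter will accept the prompt.
    image = client.images.generate(model="dall-e-3", prompt=prompt)
    print(image.data[0].url)
```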


There is a combination of mechanisms: certain combinations of words are explicitly forbidden for image generation, and certain combinations of three or more words appearing close together are likely to trigger a cautionary warning about the type of content they might produce. It is not an explicit list; that is just a basic explanation of some of the safety implementations and guardrails that were put in place for DALL-E image generation.

After reviewing some of the questions you had about specific prompts, I can see that you are regularly using several of those kinds of word combinations in different ways, and I think a translation mishap is occurring. For instance, you describe an older woman’s adult daughter in the same scene where you mention that they are in a bathroom, and then you mention body language, a skirt, and a top. The combination of those words, in the order you sent them, could be misconstrued as describing adult activities involving a daughter in a bathroom with an older woman, and things of that nature.

You should work on the placement of your adjectives and the way you describe things. Instead of saying, for instance, “an older woman and her adult daughter”, you could just say “an older woman and a younger woman”, so you would not have to put the word “adult” next to “daughter”, which can read as implying a young person combined with adult activity. You also want to avoid terms that make it sound like you are explicitly trying to draw from copyrighted material. For instance, instead of saying “she looks like Elsa from Frozen”, you would say “a person who looks like a fairy-tale princess with ice powers”.

Thanks.
I am not writing those prompts by hand, by the way; they are scenes from 20th-century literature, mostly for students. Elsa is a German name from the last century, so it appears in literature often. I did not even know it was a Disney character, but hopefully the AI will someday be clever enough to learn there is not just one Elsa on this planet. How is Donald dealing with that situation? (The future US president, will he be sued by Disney soon? :sweat_smile:)

So far I was able to remove all words that trigger a block… Adolf Hitler became “a Caucasian leader in military uniform with a Chaplin beard”, for example, which works very well :wink: (analysing Hitler or Hitler-related caricatures in historic literature is very common; it’s hard to get around that person).

Even when I asked ChatGPT what makes DALL-E block this image creation and whether there is anything sexual in the scene, it said it is fine, there is nothing :slight_smile: Seems like ChatGPT is more clever than DALL-E :wink:

By the way, yes, it’s a little bit frustrating that German words like “erwachsen”, which do not have any sexual connotation, become “adult” in English, where the language has developed a sexual connotation for this word.

“Grown-up” would be the correct translation.

“Adult scene” would not translate into “Erwachsenenszene” in German; you need to say “sexual content”… otherwise (“grown-up content”) it would not be very specific…


Have you tried experimenting with alternative ways to translate your German content first, rather than relying on GPT for it, to see whether the translation is more in line with your intentions?

As the saying goes: many roads lead to Rome :wink: