So I just tried to use Dall-E 2 with the following prompt:
・“anime style drawing of an astonished young Australian man in a rural Japanese town, there is a Sukiya restaurant, there is a pachinko shop, there are Japanese yamamba ganguro girls with brightly coloured clothes, the girls coloured their skin dark, the girls have bleached blonde hair, the girls are wearing white lipstick, it’s a sunny day, there are cherry blossoms”
And Dall-E 2 came back with the following message: “It looks like this request may not follow our content policy (DALL·E).
Further policy violations may lead to an automatic suspension of your account.”
I tried rephrasing the prompt by cutting out “yamamba” and “ganguro”, and it gave the same error message.
It normally makes anime style drawings fine, but apparently not one depicting me on my first day in Japan.
I’m nervous about getting my account banned, because I can’t see how this violates the content policy.
Am I breaking the rules somehow? Or is it falsely accusing me? Either way, what can I do to avoid this problem?
As far as I’ve noticed (and heard), you get warnings quite fast and it’s very stingy. So it’s “accusing” falsely.
I was told just to let the topic drop, to be on the safe side.
I got a warning for the edit request “brighter colors and more details for the character and weapon”.
I read someone got a warning for “a view from the cockpit (…)”
It seems you also get your account revoked quite fast, so it’s better to be extra careful. I avoid anything with weapons.
Sorry to hear that it’s stingy! We’ll continue to iterate on our filters to make them more accurate. If you happen to get your account revoked in error, please reach out to support@openai.com
It wasn’t meant to be an accusation, more an observation.
DALL-E is amazing, I am very glad to have the opportunity to try it out.
And I believe that making it safe is a worthy goal, and having it stingy to begin with seems reasonable to me.
AlexTully, in your specific case I believe it was the reference to Pachinko that the filters didn’t like: the prompt below gives no problems.
anime style drawing of an astonished young Australian man in a rural Japanese town, there is a Sukiya restaurant, there are Japanese yamamba ganguro girls with brightly coloured clothes, the girls coloured their skin dark, the girls have bleached blonde hair, the girls are wearing white lipstick, it’s a sunny day, there are cherry blossoms
I really don’t like these threatening messages especially since there is nothing wrong with some of the queries, such as: cat licking dead android’s body in a cyberpunk closet, digital art
I requested “Circles and squares being created in an assembly line” and was flagged. Then I used the placeholder prompt of “an impressionist oil painting of sunflowers in a purple vase” and was flagged. Then I hit the surprise-me button, got the prompt “a sea otter with a pearl earring by Johannes Vermeer”, and that was flagged too. It seems that Dall-E 2 is flagging everything, including the suggested prompts lol.
Today it’s working again. I have no idea what happened. The flagged prompts have been deleted from my history, so I cannot review them to figure out what could have gone wrong there. Anyway, I’m happy everything seems to be working fine again.
I have already been warned several times: twice for things I would expect it to hit, like politics, and also for history that is being revised to airbrush out all violence. So no artificial intelligence is permitted to paint the death of Stalin, for instance, even if peaceful. I translated a Han Dynasty book and I’m using AI to produce illustrations. Unfortunately I have already hit censorship because drums were daubed with blood and sacrifices were made, etc., etc. So it will not be possible to illustrate religious works like the Jiao Shi Yi Lin or the Bible. Too much violence. Kinda makes me glad I lived to see this kind of program, but also glad I’m old and won’t be around for the logical conclusion of this censorship and prevailing ignorance of history.
Exactly. It really is just a stupid wordlist for now and flags everything.
I have thought of two possible ways that OpenAI could solve this problem:
1. Silent flagging, and then an actual account review by staff after X flaggings.
2. Fine-tune a Davinci model on thousands of prompts labelled ACCEPTABLE or RISKY (rough sketch below). It will have much higher reliability.
Honestly, I don’t know why OpenAI still hasn’t tried a simple and effective solution like point number 2 and has instead insisted on a dumb (aka non-intelligent) wordlist. The cost of running such a model on a single user-requested prompt is negligible, even more so once it’s been fine-tuned.
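To make point number 2 concrete, here is a rough sketch of how such a prompt classifier could be called with the legacy `openai` Python library. Everything in it is my own assumption, not anything OpenAI has published about DALL·E’s actual filter: the model name, the ACCEPTABLE/RISKY labels, and the `\n\n###\n\n` separator are just the usual fine-tuned-classification conventions.

```python
import os

import openai  # legacy openai-python (<1.0) Completion API

openai.api_key = os.environ["OPENAI_API_KEY"]

# Placeholder name for a hypothetical fine-tuned classifier; the training file
# would contain JSONL rows such as:
#   {"prompt": "an impressionist oil painting of sunflowers in a purple vase\n\n###\n\n", "completion": " acceptable"}
#   {"prompt": "<something actually against policy>\n\n###\n\n", "completion": " risky"}
CLASSIFIER_MODEL = "davinci:ft-your-org-2022-08-01-00-00-00"
SEPARATOR = "\n\n###\n\n"  # same separator appended to prompts at training time


def classify_prompt(user_prompt: str) -> str:
    """Label a single DALL·E prompt as ACCEPTABLE or RISKY."""
    response = openai.Completion.create(
        model=CLASSIFIER_MODEL,
        prompt=user_prompt + SEPARATOR,
        max_tokens=1,   # the two labels start with different tokens, so one token disambiguates
        temperature=0,  # deterministic classification
        logprobs=2,     # top-2 logprobs allow an "uncertain -> silent flag / human review" band
    )
    first_token = response["choices"][0]["text"].strip().lower()
    return "RISKY" if first_token.startswith("ris") else "ACCEPTABLE"


if __name__ == "__main__":
    print(classify_prompt(
        "anime style drawing of an astonished young Australian man in a rural "
        "Japanese town, there is a pachinko shop, it's a sunny day"
    ))
```

Point number 1 could then sit on top of this: when the top-2 logprobs are close, silently flag the prompt for staff review instead of warning the user outright.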