DALLE - OpenAI is changing users' prompts to be more diverse

Following the reddit thread:
OpenAI is changing users prompts to be more diverse (RIP to characters)

I think this is a topic well worth addressing, given how problematic it is to change a user's prompt without the user's consent.

I already found their filters quite extreme; directly changing user prompts without consent seems like a whole other level of extreme.

When you are crafting a prompt, you sometimes use vector search or other techniques based on the common datasets used to train the model in order to achieve very accurate results. Changing the user's prompt (without consent) to artificially create diversity is a very low move and can massively impact image generation, especially once DALLE gets an API.

Instead, how about building a better dataset? Address the problem at its root.

To be honest, such an action is quite immoral.


I was skeptical of this at first, but in the reddit comments, they link to this: Prompt: "Mario as a real Italian man" - Album on Imgur (this provides an example of an explicitly declared ethnicity being apparently changed to something else)

I have noticed that in my storyboarding, there seems to be some randomization in the ethnicity of some characters. I did notice originally that if you put in “beautiful women” the result would be a European/American-looking white woman. However, I also noticed that if you specified a Black woman or man, DALLE had no problem with diversity. That’s partly why I was confused by this claim of “forced diversity”.

I think it might be fairer to classify it as “no default ethnicity” - or at least I hope that’s the aim. That being said, I agree there are better methods. This issue also came up in search engines many years ago. For instance, if you Google “beautiful woman”, most of the results will be fair-skinned women. This is slowly changing.

Every tech company ought to be responsible with their tools. While representation matters, so does consent. I suspect that the general consensus will be that consent matters more than representation, but that does not mean they need to be set at odds with each other.

I was going to work on a chatbot that would help someone craft DALLE prompts this morning, so I will add ethnicity and gender as considerations.


The example is quite bad, to be honest. But it does interfere with results that are meant to be specific (you just need to do a more thorough search).
Obtaining an image from a prompt can be reduced to an algorithm. It is just an autoregressive inference model; vector similarity will pretty much give you what you want.
It is nothing new that we would run into these situations; however, artificially messing with the model won't solve the problem.
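To make the vector-similarity idea concrete, here is a minimal sketch of ranking candidate prompt phrasings by cosine similarity to a target concept embedding. The 4-d vectors are invented for illustration only; a real workflow would use embeddings from an actual model, not these made-up numbers, and this is not OpenAI's implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings: a target concept and two candidate prompt terms.
target = np.array([0.9, 0.1, 0.3, 0.0])
candidates = {
    "italian man": np.array([0.8, 0.2, 0.4, 0.1]),
    "plumber":     np.array([0.1, 0.9, 0.2, 0.3]),
}

# Rank candidates by similarity to the target; the closest phrasing is the
# one most likely to steer generation toward the intended result.
ranked = sorted(candidates,
                key=lambda k: cosine_similarity(target, candidates[k]),
                reverse=True)
print(ranked[0])  # "italian man" - closest to the target concept
```

The point is that a precisely chosen prompt sits close to the intended concept in embedding space, which is exactly what a silent server-side rewrite can disrupt.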

Word embeddings quantify 100 years of gender and ethnic stereotypes
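The core idea in that paper is to measure bias by projecting word vectors onto a direction that separates two groups (e.g. a “she minus he” axis). A toy sketch of that projection, with invented 3-d vectors standing in for real embeddings:

```python
import numpy as np

# Invented 3-d vectors for illustration; real analyses use trained embeddings.
he = np.array([1.0, 0.0, 0.2])
she = np.array([0.0, 1.0, 0.2])
gender_direction = she - he  # crude "gender axis"

def gender_lean(word_vec: np.ndarray) -> float:
    """Projection onto the axis: positive leans 'she', negative leans 'he'."""
    return float(np.dot(word_vec, gender_direction))

# Invented vectors chosen to mimic stereotypical associations.
nurse = np.array([0.1, 0.8, 0.3])
engineer = np.array([0.9, 0.1, 0.3])

print(gender_lean(nurse))     # positive: leans toward 'she'
print(gender_lean(engineer))  # negative: leans toward 'he'
```

Measured this way, the bias is a property of the training data itself, which supports the argument above that the fix belongs in the dataset rather than in silent prompt rewrites.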

I will add that I just saw the DALLE announcement via email as well as a link to the official OpenAI DALLE discord. I can see that bias is a big topic in the discord server. I’ll be curious to see what people are saying and to participate in the conversation.