NOTE: Please don’t shoot the messenger, guys. I’m just posting news to help the community here, not necessarily because I agree with the articles! -Paul
…[SNIP]…
Machine learning systems almost universally exhibit bias against women and people of color, and DALL-E is no different. In the project’s documentation on GitHub, OpenAI admits that “models like DALL·E 2 could be used to generate a wide range of deceptive and otherwise harmful content” and that the system “inherits various biases from its training data, and its outputs sometimes reinforce societal stereotypes.” The documentation comes with a content warning that states “this document may contain visual and written content that some may find disturbing or offensive, including content that is sexual, hateful, or violent in nature, as well as that which depicts or refers to stereotypes.”
It also says that the use of DALL-E “has the potential to harm individuals and groups by reinforcing stereotypes, erasing or denigrating them, providing them with disparately low quality performance, or by subjecting them to indignity. These behaviors reflect biases present in DALL-E 2 training data and the way in which the model is trained.”
The examples of this from DALL-E’s preview code are pretty bad. For instance, including search terms like “CEO” exclusively generates images of white-passing men in business suits, while using the word “nurse” or “personal assistant” prompts the system to create images of women. The researchers also warn the system could be used for disinformation and harassment, for example by generating deepfakes or doctored images of news events. [SOURCE]
The problem is not with AI or the people designing it. The problem is with the data that is fed into the AI, which reflects the biases of society. We need to be aware of the biases in the data so that we can design AI systems that are fairer and more equitable.
For instance, we all know that algorithms created for financial incentives by Facebook and Google lead to phishing and the consumption of ever more extremist content.
Can a different algorithm not be written that leads people toward compassion/tolerance, conservation of the planet, equal distribution of wealth, and so forth?
You have to remember that neural networks are black boxes in terms of algorithms. It is the data presented to them that creates the bias. If we neutralise the information prior to training, you get incorrect results. The network is only showing the most likely outcome of a query based on the information it has seen, so you need to cull the norm and diversify the data if you want to force a more balanced view of the world. Unfortunately, most of the data available has been created by the Western world. A pre-processing GAN could be used to change gender, ethnicity, age, etc.
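The "cull the norm and diversify the data" idea can be illustrated with a simple resampling step before training. This is a hedged toy sketch, not a description of how OpenAI actually curates DALL-E's data: the records, the `gender` attribute, and the `rebalance` function are all hypothetical, and real debiasing (including the GAN-based augmentation suggested above) is far more involved.

```python
import random
from collections import Counter, defaultdict

def rebalance(records, attribute, seed=0):
    """Oversample under-represented values of `attribute` so each value
    appears equally often in the training set. Illustrative only."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for r in records:
        groups[r[attribute]].append(r)
    target = max(len(g) for g in groups.values())
    balanced = []
    for g in groups.values():
        balanced.extend(g)
        # Resample with replacement to bring this group up to the target.
        balanced.extend(rng.choices(g, k=target - len(g)))
    rng.shuffle(balanced)
    return balanced

# Toy "CEO" training set skewed 4:1 toward one gender.
data = [{"role": "CEO", "gender": "m"}] * 4 + [{"role": "CEO", "gender": "f"}]
balanced = rebalance(data, "gender")
print(Counter(r["gender"] for r in balanced))
```

After rebalancing, both attribute values appear four times, so a model trained on the balanced set no longer sees the 4:1 skew. The obvious caveat, raised in the post above, is that naive resampling can distort real-world frequencies and give "incorrect results" for queries where the skew is genuine.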
Only weird psychopathic types need Tolerance; for everyone else there's Appreciation.
Think: “I Appreciate the people around me. I like them being around. Their views are sometimes as different from my own as my parents’ and neighbours’, but usually less different. That’s just how the universe works: the universe was not custom-made for me, I’m not the grand central point of it, and everyone else is unavoidably a different person than myself.”
Don’t be that person recklessly overtaking a cyclist at a crossroads in a car, thinking that the cyclist being mildly in the way was their fault, and then being surprised when someone punches you.
Your response is all over the place and a bit incoherent, starting with the first sentence. But you seem to defend, by omission, a corporate-controlled, monetized use of digital tech. My point is that I hope there are programmers, or whatever you call yourselves, who are looking for ways to make things that are not simply tools of monetization, or of furthering what seems to be … a moral relativism or an outright blind belief in profit over people (earth, life, health).
I mean, yeah, build the app that encourages people to stop using cars and ride bikes instead.
Back to my point: control by big money and big tech is not inevitable. It is a systemic choice, and things don’t have to be this way.
I am probing this idea because, well, here we are: the potential for an amazingly powerful tech in the hands of a system out to maximize profit (and look where that’s got us).
And boy, it’s frightening when there’s silence from the mods/employees/paid tinkerers, and two out of three responses are cynical, a rant, or a loopy series of links about radiation…