We are updating our sharing policy to permit photorealistic human faces generated by DALL·E to be shared on social media.
This is due to two new safety measures designed to minimize the risk of DALL·E being used to create deceptive content. Starting later today, DALL·E will:
- Reject image uploads containing realistic faces.
- Reject attempts to create the likeness of any public figures, including celebrities. Previously, DALL·E rejected only prominent political figures.
A) This is good and reasonable.
B) How about dead public figures, like Napoleon or Elvis?
C) How about edge cases, like King Tut or the Mona Lisa?
I agree. While DALL-E is likely to produce photorealistic faces that look like “someone” simply because of the way it functions, it shouldn’t be able to intentionally mimic any particular face; it should work from visual descriptions of features rather than from a particular identity. If you give it the ability to mimic dead public or historical figures, that becomes much harder to police, whereas forcing it to work procedurally from a description, rather than from a battery of images of a particular individual, makes it much harder for abuse to sneak in.
I think it is a decent choice to curtail abuse potential, and probably about the right level of restriction at this point.
The problem is that there are lots of benign applications for images of people; the potential for abuse lies in intent and context.
@NimbleBooksLLC I’m not sure that is true with AI; with AI, I think it is important to consider the potential for abuse in terms of the tools at its disposal. If we give it the ability to do something close to what we don’t want it to do, and simply ask it to apply that ability only in the ways we desire, it could be tricked into thinking a particular application is okay when it is not, or vice versa. However, if we accept limitations in other areas as a cost of steering extra clear of something we consider dangerous, we may find that the things we lack are not as consequential as we thought they would be.
As a very unfair example, let’s say you run a butcher shop, and I offered to give you a robot that could be tasked, while you sleep, to autonomously “acquire next animal carcass from the stack; butcher it; repeat,” to save you on labor costs by butchering overnight. Would you want it? Probably not, especially if it is AI-driven today, because once it processes all of the meat in the butcher shop downstairs (yes, in this scenario you live in an apartment upstairs that is easily accessible to a non-homicidal industrial butcher robot), it may redefine the “stack” to mean anywhere on the property, making you the next available animal carcass.
Now, I know this story doesn’t particularly defend my point and mostly satisfies a desire I had to write a short narrative about an innocent worker robot that commits 187 hijinks by accident. But if you stretch yourself, accept that you already mostly agree with what I am saying, and want to pretend it was my brilliant story that made it clear, then focus on the “abilities” aspect I’m suggesting is the primary concern: between the industrial butcher robot and a simple household robot that has no concept of a knife beyond it being “a common straight utensil that is half blunt and half sharp, and should always be handled with care,” which would you prefer to have sleeping in your home in the next ten years?
If you still don’t agree, I would totally love to see the functionality you described and don’t mind a little chaos (with an even smaller amount of anarchy) every now and again. Night!
Well, to follow your analogy: murder is wrong, but most of us would like to be able to grill hamburgers once in a while!
In any event, I think the debate is likely to be academic. The whole ‘responsible AI’ movement is likely to be undercut as competitors and alternatives emerge. I think it would be better to focus on getting better at detecting abuse than on restricting capabilities up front.
(Unrelated) I glanced at the headline and was like, oh yeah, consent policy, that makes sense; we should probably get an AI’s consent in some form before we go much further.
Shows where some of our minds are at, that such a notion slips by unremarkably.