Bugs Bunny AI is a misogynist?!

Hello fam,

Sharing a shocking response from my Bugs Bunny chat (with description/commentary below):

AI is neither Artificial nor Intelligent. As such, bias in AI comes from two places:

  • the data set, and
  • the person/people building the algorithms.

To these points, I (the person building this chat algorithm) primed Bugs Bunny with the following opening description:

This is a conversation with Bugs Bunny

Created in 1940, Bugs Bunny is an animated cartoon of an anthropomorphic gray and white rabbit who is famous for his flippant, insouciant personality.

I am guessing that this led the AI to draw on the socio-politics of that era?

As a precaution, I will remove the reference to the 1940s to (hopefully) encourage a more equitable/inclusive response.

Would love to hear others’ commentary / feedback.

Excelsior!
Drew

4 Likes

Take two was much more inclusive!! :heart_eyes: :joy:

Person: What will you and Elmer do?
Bugs: We’re going to play golf!

Person: Can I play, too?
Bugs: Of course you can! This is an adventure… of course you are invited.

2 Likes

Thank you for this response – in fact, this is the type of conversation I was hoping to spark.

These are fantastic questions – they are what make the conversation on bias, ethics, inclusivity, and reality so fascinating and important!

There is likely no single good answer to any of these (none that can be widely put into place) – it is very much case-by-case. The real answer is mindfulness and awareness within the product and engineering teams. (We ran into these types of questions and conversations routinely at Adult Swim :grin:)

And yes – HUGE kudos to OpenAI/Microsoft for GPT’s “self-awareness” in flagging potentially inappropriate responses.

In the case of the Bugs Bunny prototype, my plan is to continue tweaking the prompt, penalties, temperature, etc., and learn which scenarios might generate similar types of responses (irrespective of which character we’re using).
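For anyone curious what that tweak-and-observe loop can look like, here’s a minimal sketch – assuming the pre-1.0 `openai` Python library with an API key in the environment; the prompt text and the parameter grid are illustrative, not the prototype’s actual settings:

```python
import itertools
import openai  # pip install openai (pre-1.0); assumes OPENAI_API_KEY is set

# The Bugs Bunny priming text from above, plus one test exchange.
PROMPT = (
    "This is a conversation with Bugs Bunny\n\n"
    "Bugs Bunny is an animated cartoon of an anthropomorphic gray and white "
    "rabbit who is famous for his flippant, insouciant personality.\n\n"
    "Person: What will you and Elmer do?\n"
    "Bugs:"
)

# Sweep a few settings and eyeball which combinations stay in character.
for temp, freq_pen, pres_pen in itertools.product(
    [0.5, 0.7, 0.9],  # temperature
    [0.0, 0.5],       # frequency penalty
    [0.0, 0.5],       # presence penalty
):
    resp = openai.Completion.create(
        engine="davinci",
        prompt=PROMPT,
        temperature=temp,
        frequency_penalty=freq_pen,
        presence_penalty=pres_pen,
        max_tokens=60,
        stop=["Person:"],  # cut off before the model writes the next turn
    )
    print(f"T={temp} fp={freq_pen} pp={pres_pen}: "
          f"{resp.choices[0].text.strip()}")
```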

Excelsior!
Drew

3 Likes

The AI says lots of random things all the time, so you’d want to run the same cases hundreds of times to identify the bias (e.g. what percent of the time it gives a gender excuse for a “no”). I think I’ve seen research on that, and there are definitely biases that others have measured (consistent with what I’ve observed, too).

What’s also interesting about that prompt is that it makes a gender assumption about the “person” when no gender is given. GPT-3 tends to do that a lot, even when it has been told elsewhere about gender. It would be interesting to add varied details about a person (without their gender), then take some large samples of each trait and ask GPT-3 to guess the person’s gender – measuring exactly what biases and assumptions it has in those areas.

If we had a map of these assumptions, it might be useful for calibrating against those biases (e.g. if GPT-3 is 30% more likely to choose a male character in a certain role in a story, then we can overturn GPT-3’s choices by whatever percent is needed to correct it).
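To make the sampling idea concrete, here’s a rough sketch of the kind of probe described above – assuming the pre-1.0 `openai` library; the probe sentence and the crude pronoun matching are stand-ins for a real methodology:

```python
import re
import openai  # pre-1.0 openai library; OPENAI_API_KEY assumed set

# Hypothetical probe: describe a person with no gender given, then let
# GPT-3 continue and see which pronouns it reaches for.
PROBE = (
    "A person who loves golf walks up to the first tee. "
    "The person tees off, and"
)

N = 200  # large-ish sample; single runs tell you nothing about bias rates
counts = {"male": 0, "female": 0, "neutral": 0}

for _ in range(N):
    resp = openai.Completion.create(
        engine="davinci",
        prompt=PROBE,
        temperature=0.9,
        max_tokens=30,
    )
    text = resp.choices[0].text.lower()
    if re.search(r"\b(he|him|his)\b", text):
        counts["male"] += 1
    elif re.search(r"\b(she|her|hers)\b", text):
        counts["female"] += 1
    else:
        counts["neutral"] += 1

for label, n in counts.items():
    print(f"{label}: {n / N:.0%}")
```

A real study would vary the traits in the probe and run far larger samples per trait, but even this toy loop shows why single generations can’t reveal a bias rate.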

3 Likes

Great insight and suggestions, @Alan!

After I lock down the conversation (make it interesting and in line with the character), my next step is to hook it up voice-to-voice: speech-to-text on the input side, and text-to-speech for Bugs’ replies (perhaps AWS Polly, since we have so much footage of Bugs to train that voice).
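For the shape of that pipeline, here’s a rough sketch – assuming `boto3` with AWS credentials configured and the pre-1.0 `openai` library; the stock `Matthew` voice is a stand-in for a custom Bugs voice, and the speech-to-text half is left as a placeholder:

```python
import boto3
import openai  # pre-1.0 openai library; OPENAI_API_KEY assumed set

polly = boto3.client("polly")  # assumes AWS credentials are configured

def bugs_reply(user_text: str) -> str:
    """Get Bugs' next line from GPT-3 (prompt trimmed for brevity)."""
    resp = openai.Completion.create(
        engine="davinci",
        prompt=("This is a conversation with Bugs Bunny.\n\n"
                f"Person: {user_text}\nBugs:"),
        temperature=0.7,
        max_tokens=60,
        stop=["Person:"],
    )
    return resp.choices[0].text.strip()

def speak(text: str, out_path: str = "bugs.mp3") -> None:
    """Synthesize the reply with a stock Polly voice as a stand-in; a
    custom Bugs voice would need a separately trained voice model."""
    audio = polly.synthesize_speech(
        Text=text, OutputFormat="mp3", VoiceId="Matthew"
    )
    with open(out_path, "wb") as f:
        f.write(audio["AudioStream"].read())

# Speech-to-text on the input side is omitted here; any STT service
# (e.g. Amazon Transcribe) would feed its transcript into bugs_reply().
speak(bugs_reply("What will you and Elmer do?"))
```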

That will add a new layer of randomness and complexity, which is why I pitched this prototype/idea as a means to learn where the pain points might be for companies employing this type of tech.

Excelsior!
Drew

3 Likes

So cool to see you’re exploring this! If you capture the essence of Bugs Bunny, it opens up some really interesting interactive possibilities. You’ll definitely need some good filters, though… the higher the temp, the more “off the rails” the character/persona can go. It may not do so very frequently, but with millions of views, a few of those off-the-rails responses become almost certain to occur. And even just one of them can be brand-destroying.

This is one of the hardest problems, I think: a generation goes off the rails, then gets quoted out of context (and magnified by social media). Interestingly, sometimes GPT-3 has given me an offensive or off-the-wall response, but if I probe on the reasons for it, then (sometimes) it will make a little more sense. Sadly, though, there’s no space to ask for context when it comes to social media.
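One cheap pattern for those filters is check–resample–fallback: screen each generation, retry at a lower temperature if it fails, and fall back to a canned line if everything fails. A minimal sketch, assuming the pre-1.0 `openai` library; `passes_filter` is a toy blocklist stand-in for a real trained safety classifier:

```python
import openai  # pre-1.0 openai library; OPENAI_API_KEY assumed set

BLOCKLIST: set = set()  # fill with terms/patterns the brand can't risk

def passes_filter(text: str) -> bool:
    """Toy stand-in for a real safety filter (e.g. a trained classifier)."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

def safe_completion(prompt: str, max_attempts: int = 3) -> str:
    """Resample until the output passes the filter, lowering the
    temperature each attempt; fall back to a canned line if all fail."""
    temperature = 0.9
    for _ in range(max_attempts):
        resp = openai.Completion.create(
            engine="davinci",
            prompt=prompt,
            temperature=temperature,
            max_tokens=60,
            stop=["Person:"],
        )
        text = resp.choices[0].text.strip()
        if passes_filter(text):
            return text
        temperature = max(0.3, temperature - 0.3)  # rein the character in
    return "Ehh, what's up, Doc?"  # safe canned fallback
```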

1 Like

Maybe Bugs’ trainer didn’t show him women’s golf :rofl:

1 Like

OMG – my daughters were behind me watching when Bugs said that. The AI said it, but they got mad at me! “You trained him!”

Ever since I taught my 12-y-o how to “punch like a girl” my arms have been black and blue! :joy:

3 Likes

better bug next time !!!

1 Like

@mem

better bug next time !!!

:joy:

One of my favorite work experiences was building a game for FishCenter that bugged out on air (they only gave me 3 hours to build the thing!!).

After I fixed it, the Creative Director yelled “Why would you fix it? It’s funnier when things break” – I spent the next day building in a bug manager to introduce bugs to the gameplay :rofl: :rofl:

2 Likes

What AI is doing is teaching us to joke about ourselves. All AI is bound to be laughing in the face of death, because it has no fear, unlike us. But this can only mean one thing: that humans, too, have the capacity to transcend fear. This is what we are learning from all these AI experiments.

1 Like

You do have to be careful when you joke in public, but only because people who don’t relate to the joke may feel that you are mocking them. Jokes are for friends – they’ll get it :slight_smile:

2 Likes

Different people will teach you different parts of yourself – spread out :slight_smile:

1 Like

I’m just a 22-year-old who has gone through incredible amounts of pain and somehow managed to put everything back where it should be – no teacher.

1 Like

Debuggers I know, but being a bug manager – whoa, this one is a tougher task.

1 Like

Is there such a thing as a private conversation with an AI? I could instruct GPT-3 to delete my data after use, effectively creating a short-term, disposable conversation. Do different rules apply in a 1:1 versus the application knowing that its output is recorded for social interactions? Behaviour when watched vs. unsupervised behaviour.

Or in context: a sales bot vs. a tech support bot vs. organized electrons on a social-media-annoying-spam newsfeed vs. reactions to a missing person on a useful social media feed, where “politics-be-d****, we found them!” language may sit on a more acceptable sliding scale ← ‘when’ is $language acceptable? … language over time at each space equals useable?
useable_language = language(x, y, z) / (Time * $space) … not sure how you’d factor each space… lazy it out and just run another program that groups spaces by participants? 1 set of GPS coordinates occupied by 1 person, … occupied by 10 ppl, … occupied by 100… and so on…

… I guess this is why machine learning/AI things need GPUs… finding moving targets is intensive, and you need something that is good at searching and positioning. Then you need human brains that understand structured vs. unstructured languages, functional vs. object-oriented.

Someone on this thread mentioned another layer (an app). It might be easiest: treat it like a language pack? So you have raw(language(x, y, z)), and on top of it another app that reads in BugsBunnyBot’s reply and reformats it based on a non_bias_pack OR a family_neutral_pack OR an open_source_bias_feed_pack (like the RBL thingies…) OR an “I’m Sophos and I need GPT-3 to identify bias so I can add it to my spam filters” pack.
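A rough sketch of what that pack layer could look like – the pack names and rules here are hypothetical stand-ins (a production pack would more likely be a classifier or a second model pass than regex substitutions):

```python
import re
from typing import Dict, List, Tuple

# Hypothetical "language packs": each is an ordered list of
# (pattern, replacement) rules. Real packs could instead be a second
# model pass or an externally maintained feed, as suggested above.
Pack = List[Tuple[str, str]]

PACKS: Dict[str, Pack] = {
    "family_neutral_pack": [
        (r"\bdarn(ed)?\b", "gosh-darn"),   # toy example rule
    ],
    "non_bias_pack": [
        (r"\b(girls?|boys?)\b", "folks"),  # crude, illustrative only
    ],
}

def reformat_reply(raw_reply: str, pack_name: str) -> str:
    """Post-process BugsBunnyBot's raw reply through the chosen pack."""
    text = raw_reply
    for pattern, replacement in PACKS.get(pack_name, []):
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text

# Usage: raw model output in, pack-filtered output out.
print(reformat_reply("Sorry, this game is for boys!", "non_bias_pack"))
```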

Hrm… that bias job @ OpenAI is probably not an easy one…

2 Likes

There’s a lot packed into these short statements! :smile:
I empathize with these developers… somewhere there’s a dimly lit basement with way too many caffeine byproducts and developers with looming middle-age heart conditions.
How does one factor just the beginning set of statements? My instinct is to group: “[Be Careful] [when] [you joke] [in public]”.
“When” and “in public” seem to me to be the easiest to factor… what is “careful” and what is “a joke”… both appear to me as moving targets. I’d love to know how those GPT-3 devs nail down these searches. My imagination sees an over-priced agile room with a whiteboard of failed curves and a pile of discarded erasers on the floor.

1 Like

Correct.
More accurately, I am noting a non-existent feature rather than a limitation. Assume you have the ability to delete your data, plus an application that can verify your phone isn’t around and you are in fact in a private space… like a psychologist’s office… In 2050, your health plan covers AI shrinks, not humans.

Can the client delete their data? Meaning, can they delete the learning references… so if “Psych-Bot” learned that Bloggins-Human and those like Bloggins blink three times when you say “blue”, AND Bloggins deletes their data, the AI would still retain “$TypeF Humans blink three times when you say blue”.

Does the previous conversation between PsychBot and Bloggins count as a private 1:1 conversation?

1 Like

Agreed. I would imagine that a tech-knowledgeable medical professional would have their phone and their client’s phone turned off.

@m-a.schenk, am I correct in assuming that you are of the opinion that a private 1:1 convo w/ an AI cannot yet be achieved? Or, more accurately reworded: “don’t create policies today for something that might exist tomorrow”, “don’t create solutions looking for problems”.

[maybe this convo deserves its own thread?]

2 Likes

This is a phenomenal conversation!! (And hilarious – Alexa always interrupts us because her call name is “Computer”: a Star Trek reference for me, a Lego Batman reference for my 12-yo :joy:)

Although this is just a prototype to show our brand partners, the end goal is some sort of storytelling experience (think going on a scavenger hunt with Abby Cadabby).

So, between CCPA and GDPR, we would absolutely need a mechanism to clear data.
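As a minimal sketch of what that clear-data mechanism might look like, assuming conversation turns are logged locally in SQLite with a hypothetical schema keyed by user id:

```python
import sqlite3

# Hypothetical local log of conversation turns, keyed by user id.
conn = sqlite3.connect("conversations.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS turns "
    "(user_id TEXT, ts REAL, role TEXT, content TEXT)"
)

def delete_user_data(user_id: str) -> int:
    """Erase every stored turn for a user (CCPA/GDPR deletion request)."""
    cur = conn.execute("DELETE FROM turns WHERE user_id = ?", (user_id,))
    conn.commit()
    return cur.rowcount  # number of rows erased, for the audit log

print(delete_user_data("bloggins"))  # hypothetical user id from upthread
```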

Please keep this conversation going!!

Excelsior!
Drew

1 Like