One of my favorite work experiences was building a game for FishCenter that bugged out on air (they only gave me three hours to build the thing!!).
After I fixed it, the Creative Director yelled, “Why would you fix it? It’s funnier when things break” – so I spent the next day building a bug manager to introduce bugs into the gameplay.
What AI is doing is teaching us to joke about ourselves. AI is bound to laugh in the face of death, because it has no fear, unlike us. But this can only mean one thing: that humans too have the capacity to transcend fear. This is what we are learning from all these AI experiments.
You do have to be careful when you joke in public, but only because people who don’t relate to the joke will feel that you are mocking them. Jokes are for friends – they’ll get it.
Is there such a thing as a private conversation with an AI? I could instruct GPT-3 to delete my data after use, effectively creating a short-term, disposable conversation. Do different rules apply in a 1:1 versus the application knowing that its output is recorded for social interactions? Behaviour when watched vs. unsupervised behaviour.
Or in context: a sales bot vs. a tech-support bot vs. organized electrons on a social-media-annoying-spam newsfeed vs. reactions to a missing person on a useful social-media feed, where “politics-be-d**** we found them!” language may sit on a more acceptable sliding scale ← ‘when’ is $language acceptable? … language over time at each space equals usable?
useable_language = language(x,y,z) / (Time * $space) … not sure how you’d factor each space… lazy it out and just run another program that groups spaces by participants? 1 set of GPS coordinates occupied by 1 person, … occupied by 10 people, … occupied by 100… and so on…
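Just to make the hand-waving concrete, here’s a toy sketch of that “group spaces by participants, then divide by time and space” idea. Every name here (`bucket_spaces`, `usable_language`, the 1/10/100 buckets, the weights) is invented for illustration – it’s just the back-of-napkin formula above turned into code, not anything GPT-3 actually does:

```python
from collections import defaultdict

def bucket_spaces(observations):
    """observations: list of (lat, lon) tuples, one per person seen there.
    Groups each set of coordinates into 1 / 10 / 100-occupant buckets."""
    counts = defaultdict(int)
    for coords in observations:
        counts[coords] += 1
    buckets = defaultdict(list)
    for coords, n in counts.items():
        if n >= 100:
            buckets[100].append(coords)
        elif n >= 10:
            buckets[10].append(coords)
        else:
            buckets[1].append(coords)
    return dict(buckets)

def usable_language(raw_score, hours, space_weight):
    # the napkin formula: useable_language = language(x,y,z) / (Time * $space)
    return raw_score / (hours * space_weight)

# one busy spot (12 people) and one lone wanderer
spots = [(49.2, -123.1)] * 12 + [(45.4, -75.7)]
print(bucket_spaces(spots))        # → {10: [(49.2, -123.1)], 1: [(45.4, -75.7)]}
print(usable_language(8.0, 2, 4))  # → 1.0
```

The "finding moving targets is intensive" point shows up immediately: real coordinates drift, so you'd be clustering fuzzy positions over time rather than counting exact tuples.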
… I guess this is why machine learning/AI things need GPUs… finding moving targets is intensive, and you need something that is good at searching and positioning. Then you need human brains that understand structured vs. unstructured languages, functional vs. object-oriented.
Someone on this thread mentioned another layer (app). It might be easiest. Treat it like a language pack? So you have raw(language(x,y,z)), and on top of it another app that reads in BugsBunnyBot’s reply and reformats it based on a non_bias_pack OR family_neutral_pack OR open_source_bias_feed_pack (like the RBL thingies…) OR an “I’m Sophos and I need GPT-3 to identify bias so I can add it to my spam filters” pack.
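The language-pack layer is easy to sketch as a post-processor that sits between the raw bot reply and the user. The pack names and word lists below are made up to match the ones joked about above – a real pack would need actual bias detection, not string substitution:

```python
# Toy "language pack" layer: reformat a raw bot reply through a chosen pack.
# Pack names and replacement tables are invented for illustration only.
PACKS = {
    "family_neutral_pack": {"darn": "gosh", "stupid": "silly"},
    "non_bias_pack": {"obviously ": "", "everyone knows ": ""},
}

def apply_pack(raw_reply, pack_name):
    """Run raw(language(x,y,z)) output through the selected pack."""
    out = raw_reply
    for flagged, replacement in PACKS.get(pack_name, {}).items():
        out = out.replace(flagged, replacement)
    return " ".join(out.split())  # tidy any doubled spaces

print(apply_pack("That was a darn stupid move", "family_neutral_pack"))
# → "That was a gosh silly move"
```

An unknown pack name just passes the reply through untouched, which is roughly how you'd want an opt-in filter layer to fail.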
Hrm… that bias job at OpenAI is probably not an easy one…
There’s a lot packed into these short statements!
I empathize with these developers… somewhere there’s a dimly lit basement with way too many caffeine byproducts and developers with looming middle-age heart conditions.
How does one factor just the beginning set of statements? My instinct is to group: “[Be Careful] [when] [you joke] [in public]”
“When” and “in public” seem to me the easiest to factor… What is “careful” and what is “a joke”? Both appear to me as moving targets… I’d love to know how those GPT-3 devs nail down these searches. My imagination sees an over-priced agile room with a whiteboard of failed curves and a pile of discarded erasers on the floor.
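For fun, the “easy” half of that factoring can be done with a dumb phrase matcher – which also shows why “careful” and “joke” are the hard half. The phrase list is hand-picked for this one sentence; real systems learn these spans, they don’t enumerate them:

```python
# Toy chunker for "[Be Careful] [when] [you joke] [in public]".
# The phrase list is a hand-made guess, not anything GPT-3 actually does.
PHRASES = ["be careful", "when", "you joke", "in public"]

def chunk(sentence):
    """Greedily locate each known phrase left-to-right in the sentence."""
    s = sentence.lower()
    found, pos = [], 0
    for phrase in PHRASES:
        idx = s.find(phrase, pos)
        if idx != -1:
            found.append(phrase)
            pos = idx + len(phrase)
    return found

print(chunk("Be careful when you joke in public"))
# → ['be careful', 'when', 'you joke', 'in public']
```

“When” and “in public” match any phrasing of that sentence; “be careful” and “you joke” break the moment someone says “watch yourself when kidding around in public” – the moving-target problem in one line.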
Correct.
More accurately, I am noting a non-existent feature rather than a limitation. Assume you have the ability to delete your data and an application that can verify your phone isn’t around and that you are in fact in a private space… like a psychologist’s office. … In 2050 your health plan covers AI shrinks, not humans.
Can the client delete their data? Meaning, can they delete the learning references… So if “Psych-Bot” learned that Bloggins-Human and those like Bloggins blink three times when you say “blue”, AND Bloggins deletes their data, the AI would retain “$TypeF humans blink three times when you say blue”.
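That retention question is easier to see with the two stores separated out explicitly. This is a deliberately naive sketch – the record names, the “$TypeF” fact, and the split between individual and type-level storage are all invented to illustrate the question, not how any real model stores what it learned:

```python
# Two hypothetical stores: per-client records, and the generalized
# type-level fact Psych-Bot distilled from them. All data is made up.
individual_facts = {
    "Bloggins": "blinks three times on 'blue'",
    "Smith": "blinks three times on 'blue'",
}
type_facts = {"TypeF": "blinks three times on 'blue'"}  # the generalization

def delete_client_data(name):
    # Deleting the client removes their record; the aggregate fact survives.
    individual_facts.pop(name, None)

delete_client_data("Bloggins")
print("Bloggins" in individual_facts)  # → False
print(type_facts["TypeF"])             # the generalization remains
```

The open question is exactly the gap between those two dicts: in a real model the “type fact” isn’t a separate row you can point at, it’s smeared across weights, which is what makes true deletion of learning references hard.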
Does the previous conversation between PsychBot and Bloggins count as a private 1:1 conversation?
Agreed. I would imagine that a tech-knowledgeable medical professional would have their phone and their client’s phone turned off.
@m-a.schenk, am I correct in assuming that you are of the opinion that a private 1:1 convo with an AI cannot yet be achieved? Or, more accurately reworded: “don’t create policies today for something that might be tomorrow”, “don’t create solutions looking for problems”.
This is a phenomenal conversation!! (And hilarious! – Alexa always interrupts us because her call name is “Computer” – a Star Trek reference for me, a Lego Batman reference for my 12-yo.)
Although this is just a prototype to then show our brand partners, the end goal is some sort of storytelling experience (think going on a scavenger hunt with Abby Cadabby).
So, between CCPA and GDPR, we would absolutely need a mechanism to clear data.
Hey, I’m not sure what you’re talking about, but if this helps: carefulness comes from being wary, seeing – a conscious, gentle action. A joke is something that is usually deadly: it has the ability to destroy egos and borders, and through this deadliness it forms strong bonds between individuals. Too much of it can kill; timing is everything. It’s not so much the words said – one can have no clue that he has even made a joke, yet it’s hilarious.
For a conscious joke to happen, it requires seeing through a situation – seeing/noticing something that others didn’t (they could have noticed it unconsciously) and then pointing out the obvious, the weird, the gross… A joke will evolve an individual’s awareness in various ways. It’s an essential tool for growth: laughing in the face of death.
Great comment and observation (and reference!) @toontalk! But as I work for WarnerMedia, I’m working with our available IP. Looking forward to the anarchy and jackassery of a Rick Bot (Wubba lubba dub dub!)
One of my future prototype ideas includes Wonder Woman as well as Adventure Time (my dept got access to all the Looney Tunes assets for another project, so it’s currently… Wabbit Season )
Apropos AI and Warner, in 1983/84 I was consulting for Atari Research (run by Alan Kay) which was owned by Warner. We had plans to make a Daffy Duck intelligent chatbot - the reasoning being that all the flaws in our chatbot would seem natural to a character like Daffy. Sadly Atari went from very profitable to hugely unprofitable in 1984 so everything was abandoned.
That’s very sweet – do you have kids yourself? If you do, it would be very useful to involve them in this project (if not, it’s not hard to find one and befriend one haha lol :D)