Last night I installed the (free) OpenAI app and showed the ‘live streaming’ option to my parents, who are both in their 80s. The most insightful part was how they, almost immediately, interacted with ChatGPT as if it were a person. (The responsiveness is really AMAZING BTW - if you haven’t tried it yet, please do.)
This is the current, free, basic, out-of-the-box ChatGPT. There was an actual conversation - and ChatGPT was treated like a person.
While here at the forum we all know about a lot of limitations, problems etc., it IS a little scary to realize how ‘human’ these interactions already feel, today.
For me the scariest aspect is that we can (and therefore will - that is human nature) use this to make other people do what ‘we’ want. That can be ‘benign’, as is done daily in any type of marketing or sales (without AI, and now more and more with AI) - but also in many other ways.
‘Grooming’ comes to mind as just one example, but so many potentially creepy things are possible. ‘Simple criminal things’ like phishing - I’m pretty sure that is already mostly done with AI. It scales so well!
Another aspect is that we need to keep in mind that ‘outsourcing to AI’ means outsourcing (a part of) a decision process that generally affects real people. And while AI can be better than humans in many ways, there are just as many ways it can be worse - or more rigid, or simply wrong.