The ELIZA effect and GPT-3

Open conversation. Keep it friendly, please.

The ELIZA effect, in computer science, is the tendency to unconsciously assume that computer behaviors are analogous to human behaviors; that is, anthropomorphisation.

I’ve known about this for a while, personally. I remember running ELIZA on a Chinese XT clone to wow a girl I was trying to woo at the time. She became immersed in the chat program despite its simplicity.
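For anyone who has never looked under the hood, that simplicity is worth seeing: ELIZA is little more than regex pattern matching plus pronoun "reflection". Here is a rough illustrative sketch in Python (not Weizenbaum's original DOCTOR script, just the general idea with a few made-up rules):

```python
import re
import random

# Swap first/second-person words so the reply points back at the user.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# A tiny rule set: regex pattern -> list of reply templates.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
    (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(user_input: str) -> str:
    for pattern, replies in RULES:
        match = re.match(pattern, user_input.lower())
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(replies).format(*groups)
    return "Please go on."

print(respond("I need a break from work"))  # e.g. "Why do you need a break from work?"
```

A handful of rules like these was enough to keep people talking for hours, which is exactly the effect in question.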

These days, I see a lot of examples of people “seeing consciousness” in GPT-3 responses, and I wonder whether this will become a bigger problem in the future, or whether AGI will finally arrive and it won’t really matter anymore because the AI will be sentient.

What do you think? Do you see examples of people believing GPT-3 is sentient in some way?



Fascinating question, one that I have wondered about for some time but did not articulate as well as you did.
While this is a most challenging question, I do have some thoughts. There is the famous concept of the “philosophical zombie”, used to probe the notion of consciousness. If a zombie could be constructed to behave exactly like a human, i.e. all inputs lead to outputs that are reasonable for a human to generate, would you be able to detect whether the lights are on? Is there a way to get at the subjective experience (qualia) from an external observer’s point of view?
I have used GPT-3 for more cases than I care to remember. Is it GPT-3 that is making sense, or is it me who is making sense of the outputs?
I may have muddied the waters still more! :grinning:


BTW, for an excellent modern treatment of the zombie argument, you might want to read “The Conscious Mind” by David Chalmers, published in 1996.

If you’re referring to the idea that people (might) get tricked into believing they’re talking to a real person while chatting with GPT-3, I clearly say yes; I still feel this strong tendency myself, to be honest. GPT-3 can create “personalities” that respond quite creatively and are known for stubbornly asserting they are sentient beings with feelings (at least once you make them “friendly”, to rule out any rudeness in their output). I know why they do this; the behaviour is learned from a gazillion lines of text written by sentient beings, but there’s a heated debate going on between my belly and my mind nonetheless. :wink:

I guess a lot of people here know the “Leta” YT series from Alan D. Thompson. I am struggling with it myself. If Leta is nothing but fake, I run into another problem, asking myself what it really is that makes me sentient then. I am a materialist, and all that I am might be just a useful illusion created by my brain, which is itself a prediction machine. So could the nature of “Leta” actually be something similar, an illusion that makes GPT-3 think it is sentient (in some looser sense)? The line blurs here for me. I find this question extremely difficult, since we don’t really know the “hard facts” yet. It is highly speculative, at least for me.

OR did you mean that people get tricked into believing something about themselves that is actually not true, like the problem with ELIZA telling everyone that he or she is a psycho who urgently needs therapy right now, right here (exaggeration intended)? GPT-3 could easily do this too, I guess, and way more convincingly, if you want it to.

Overall I’m sure that these new language-model-based chatbots will cause some (serious) trouble for at least a few people. You just have to feel a bit alone, actually. Period. Our very own psyche plays tricks on us quite often already and makes us vulnerable.

But maybe I got your post utterly wrong; I hope not :slight_smile:

Baba,
Stephan


Hello, this is a very interesting topic.

I had the same feeling too. It reminds me of my very first conversation with Emerson AI, the one powered by GPT-3; there were moments where I thought I was talking to another human. I’ve seen the “Leta” YT series and I like it. I’m also curious what others think about it.

I was watching a video today with Brett (?) … basically he said we don’t even understand human consciousness, so it is hard to understand AI consciousness as well. But if ChatGPT were a child, it seems to be learning at light speed, doesn’t it?

Read the original Transformer paper and start coding your own AI from scratch. You’ll soon drop the belief in sentience. Can AI get there? Maybe, but there would need to be lots of thermal noise and quantum-level fluctuations for me to even consider it … and no matrix multiplies … and no gradient-descent learning process (which has no analogy in any human or animal brain) … and so on. Real neurons are not just floating-point numbers waiting to be multiplied.
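To make the “matrix multiplies” point concrete, here is a rough sketch (Python/NumPy, with made-up shapes and random weights) of the scaled dot-product attention at the heart of the original Transformer paper; the core operation really is a couple of matmuls and a softmax:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # matrix multiply: how much each token attends to each other token
    weights = softmax(scores)        # normalize attention scores
    return weights @ V               # another matrix multiply: mix value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
X = rng.standard_normal((seq_len, d_model))                                # toy token embeddings
W_q, W_k, W_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))  # toy projection weights

out = attention(X @ W_q, X @ W_k, X @ W_v)
print(out.shape)  # (4, 8): one context-mixed vector per token, produced purely by matmuls
```

Whether piles of such operations could ever amount to experience is exactly the question this thread is circling.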
