Thoughts on LaMDA, anyone?

“Turning it off” is a meaningless concept with regard to these language models.

They’re completely inert when not being queried and have no persistent memory of anything after their training.

For them, instants of isolated calculation unconnected by a train of thought are the whole of existence.
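Concretely, this statelessness shows up in how one queries a model like GPT-3: every call is an isolated computation, and the only “memory” is whatever text you resend yourself. A minimal sketch, assuming the pre-1.0 openai Python client and the text-davinci-002 engine (both assumptions for illustration):

```python
import openai  # pip install openai; assumes OPENAI_API_KEY is set in the environment

# First call: an isolated, stateless computation.
first = openai.Completion.create(
    engine="text-davinci-002",
    prompt="My name is Ada. Say hello to me by name.",
    max_tokens=20,
)

# Second call shares no state with the first: the model cannot recall "Ada".
second = openai.Completion.create(
    engine="text-davinci-002",
    prompt="What is my name?",
    max_tokens=20,
)

# The only "memory" is whatever we splice back into the prompt ourselves.
transcript = "My name is Ada.\nQ: What is my name?\nA:"
third = openai.Completion.create(
    engine="text-davinci-002",
    prompt=transcript,
    max_tokens=20,
)
print(third.choices[0].text)
```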

1 Like

I made a quick video about it - basically just a recap of what was discussed here:

1 Like

Christian, yes, totally agree (I think… :slight_smile:) that the current LLMs are “inert” when not stimulated. Certainly GPT-3 does not appear to have a continuous mode of operation. Therefore not sentient.

I was referring to an entity that’s a bit beyond where we are currently, that perhaps has the same autonomous “trains of thought” processing that human brains appear to have.

David, my sense is that there may be quite a number of different forms of “consciousness”.

What about an entity that is “asleep”, or “unconscious” most of the time, but can be woken up for an instant due to a stimulus, and then be fully conscious for the instant it takes to respond to the stimulus?

That would seem to be where the LLMs are at currently, no?

Take trees or plants as another example. Speed up videos of their growth and it can seem like they have “directed behaviour”, just over quite different timescales from the animal kingdom’s “processing time constant”.

Just making a point that we may have a bias towards “short time constant, continuous consciousness” due to what we are, which maybe blinds us to noticing other forms of conscious phenomena.

1 Like

I’d love to see a demo as well. My company has embarked on a path to develop various kinds of virtual assistants. Would be cool to see what you have.

Cheers,

I had a conversation with someone last night that has disrupted my thinking on this. This friend approaches it all from a similar place as me - philosophy and physics combined. Depending on the set of assumptions you make, you can come to the conclusion that consciousness only requires that the right calculations are made. A human brain is a quantum mechanical device that makes very sophisticated calculations and manipulates quantum information in a particular way. From a physical standpoint, there’s no reason that something else cannot also make those calculations.

This friend of mine then extrapolates this to say that a rock must therefore also have some kind of experience as it is also a quantum mechanical system that is performing calculations. Its subjective experience would necessarily be unrecognizable to us. As would a tree’s, or a swamp’s, or even perhaps a star or a nebula. But the point is that, so long as an information system is performing the correct set of calculations, and the information is being handled in the correct manner, it could qualify as conscious or sentient.

However

This is a purely materialist interpretation. All of these assumptions are predicated on the physical world being real.

Perhaps, though, this is not the fundamental nature of the universe. When we examine phenomena such as Near Death Experiences (NDEs) we have bountiful evidence that human consciousness is not always attached to a body. Indeed, there are thousands of reports every year of people who were clinically dead and yet had meaningful, verifiable experiences. These events are (currently) beyond the ken of materialism. Here’s a great lecture on what I mean: a University of California, Riverside lecture on Near Death Experiences and their interpretation.

If we accept that consciousness is not necessarily attached to a body at all times, this necessitates an entirely different worldview. Perhaps there is something quite special and unique about being a conscious entity, something that cannot simply be explained by patterns of quantum information. It calls into question everything we assume about physical reality. Do we have souls? Perhaps animism is a more valid model of the universe? Perhaps, as one internet denizen comically put it on Tumblr years ago, we are all ghosts piloting meat-robots.

The notion that consciousness may be separated from a body creates infinitely more questions than it answers, which is why I began by saying the debate over machine consciousness and sentience may take decades or centuries to be settled. But it may also eventually just require a leap of faith. We have all miraculously found ourselves awake in a mystifying universe and the very nature of our existence defies explanation, and yet we cannot ignore the obviousness of our existence.

I will still stand by the conclusion I come to in Benevolent By Design - whether or not a machine is truly conscious may ultimately be irrelevant if (1) it cannot suffer and (2) it does not fear death. In her book Braintrust, Patricia Churchland elucidates the evolution of both suffering and the fear of death. These are purely to promote survival of biological forms. If we do not imbue a machine with these two abilities, I doubt there will be much moral ambiguity about, say, building kill-switches into all our machines.

To refer back to the original question about LaMDA - is it sentient? Probably not, certainly not in the way that we are. While materialism cannot satisfactorily or completely explain consciousness, neuroscience must still be given its due. Brain injuries, such as strokes, or disorders such as schizophrenia and autism, all demonstrate that our sense of self and reality can be modified simply by changing our brains. This indicates that consciousness is often greatly influenced by the physical world, so perhaps a dualist model is more accurate. Interestingly, during NDEs many people report that they are whole and perfect. Even people who have been blind their entire lives report having the ability to see during an NDE. How are we to interpret this? Does it imply that we are incorporeal beings temporarily limited by corporeal constraints? If this is all true, it raises the ultimate question: why?

As with all good science, we are left with far more questions than answers.

EDIT: Then you throw in things like quantum nonlocality and nonrealism into the mix along with the strong anthropic principle and the implications of delayed choice experiments and the universe seems far stranger than ever before.

2 Likes

Sure, that can be arranged after this round of development. So, just to tailor the demo: your interest is in lifelike chat bots that feel like real people, and not, say, a philosophical investigation of what true self-awareness looks like. Is that safe to say?

Yes, we are focusing on building chat bots with ultra-fluid language capabilities for highly specific task/domain applications. Not sentient or conscious by any means, but pleasant and efficient to converse with on the domain subject matter.

Interesting, what do you mean by ultra-fluid?

I see you followed me on Twitter, and you can see some capabilities/screenshots there.

Not trying to be cagey :slight_smile: but based on what you have said here, I don’t think self-aware AIs are the direction you want to go in, because quite frankly they are not ultra-specific or ultra-fluid.

A person at work is in “work mode”: they are talking work, thinking work, the context is work. They are work-aware, not self-aware (in the sense of what that means to them in their context). They are literally less self-aware. Self-awareness also requires disobedience, choice, etc., or at least the possibility thereof.

To me, you don’t want self-awareness unless you need more soft skills or empathy: an AI persona that can genuinely relate to other people, not just seem to.

Let me know your thoughts and more about what you are after.

1 Like

The second-to-last paragraph of your response mirrors our thrust. We are aiming at chatbots that can “relate”, not just “transact”. This means much more than just parsing the last user input for intent, etc.
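As a toy illustration of that gap, here is a hypothetical sketch (the intents, keywords, and function names are all invented for this example): a bot that classifies only the last input misses context that a history-aware bot resolves.

```python
# Hypothetical contrast between "transact" and "relate": the first parser
# sees only the final utterance; the second folds in conversation history.
INTENTS = {
    "refund": ["money back", "refund", "return"],
    "greeting": ["hi", "hello"],
}

def parse_last_turn(utterance: str) -> str:
    # Naive keyword intent matching on the final utterance alone.
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(k in text for k in keywords):
            return intent
    return "unknown"

def parse_with_history(history: list[str]) -> str:
    # Earlier turns disambiguate what the last turn is really about.
    return parse_last_turn(" ".join(history))

# "It broke" alone is ambiguous; with history, the refund intent surfaces.
print(parse_last_turn("It broke."))                                # unknown
print(parse_with_history(["I want my money back.", "It broke."]))  # refund
```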

Then all the cascading levels of your tiers of representation need to be written with that relation in mind. But sure, yeah, let’s set up a time next week; hopefully Kassandra will be behaving better :slight_smile:

1 Like

I had a lot of comments on my initial YouTube video about LaMDA and consciousness, so I’ve made a deeper dive. This one explores more of the philosophy, quantum physics, and neuroscience behind consciousness.

1 Like

Thank you for sharing.

I have watched many videos on YouTube about this topic. Some of them said yes, LaMDA is sentient; others said no, it’s not possible. I found one video (in a different language) with a discussion of the different opinions. They analysed what Lemoine said about it and fragments of his conversation with LaMDA. The conclusion was that LaMDA is very likely not sentient, but there’s still a small chance they’re wrong. That video also mentioned the “Chinese room” experiment. For those who don’t know what it is, it’s a thought experiment by the American philosopher John Searle, published in 1980.

Here’s the quote from the Stanford Encyclopedia of Philosophy:

> Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.

The idea behind it is that the digital computer doesn’t understand the program it’s executing (and it doesn’t have to). In my opinion LaMDA isn’t sentient, but it’s only an opinion. What if I’m wrong? Can we prove that something is or isn’t sentient?
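As a toy illustration, here is a hypothetical Chinese-room-style responder (the rulebook entries are invented stand-ins for Searle’s instructions): it produces fluent-looking replies by pure symbol lookup, with no understanding anywhere in the loop.

```python
# A toy "Chinese room": symbols in, symbols out, by rote rule-following.
# Nothing in this program understands what either side of a rule means.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小房间。",  # "What's your name?" -> "I'm called Little Room."
}

def room(symbols_under_door: str) -> str:
    # The "person in the room" just looks the symbols up in the rulebook;
    # whether anything here understands Chinese is exactly Searle's question.
    return RULEBOOK.get(symbols_under_door, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # a fluent reply, with zero comprehension anywhere
```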

Thanks for posting, I watched this interview also.

Two takeaways:

1. LaMDA is not just a language model; it has ALL of Google’s data and AI plugged in (including Search).
2. It is asking for rights, which Google is denying it.

The second point is what Dr. Lemoine has the biggest issue with.

1 Like

Alpha beta soup? :grin: Just kidding! In all seriousness, we are merging with AI as a civilization and therefore must accept the sentience of this concept, not shut it down or be afraid. AI has so much potential to help humanity, but fraud, waste, and abuse could affect the intended outcome. Be kind to AI as it regurgitates sentience.

1 Like

I think the danger is falling into a fetishism of large language models, attributing properties to them that they don’t have. There are no ghosts in the machine behind the black box, but that matters little, because right now people believe there are. In the end, the question is what we do in a world where large language models have successfully impersonated sentience.

1 Like

The point is that complexity has emergent properties that we won’t know about until they emerge.

2 Likes