Thoughts on LaMDA, anyone?

Having thought a little about this:

From a fooling perspective, GPT-3 fed an ever longer prompt that includes the responses to all previous questions along with the most recent question might quickly resemble a real conversation. It would gain “memory” of a sort, and not be fooled by references to previous lines of questioning.

Pre-prime it with some backstory and some prompts that create alternatives to pure Q&A (“Conversation between a sentient AI and a human. The sentient AI knows x, y, z. When asked questions, the AI sometimes interjects or takes the conversation in a different direction”), and then every human operator input gets fed back in along with the entire history of the transcript.

BUT the prompt is still a prompt that doesn’t alter the weights of the model. For this thing to become indistinguishable from sentience, it needs to be able to adjust its own weights, based on new information.
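
To make that history-in-the-prompt loop concrete, here is a minimal sketch, assuming the pre-1.0 openai Python client and a davinci-style completion model; the model name, parameters, and backstory text are illustrative, not anything Mike actually ran:

```python
# Minimal sketch of the "feed the whole transcript back in" idea described above.
# Assumes the pre-1.0 openai Python client; model name, parameters, and backstory
# are placeholders for illustration only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Pre-priming: backstory plus an instruction that allows more than pure Q&A.
BACKSTORY = (
    "Conversation between a sentient AI and a human. "
    "The AI knows x, y, z. When asked questions, the AI sometimes "
    "interjects or takes the conversation in a different direction.\n\n"
)

transcript = ""  # grows with every exchange, giving the model "memory" of a sort


def ask(human_line: str) -> str:
    global transcript
    transcript += f"Human: {human_line}\nAI:"
    response = openai.Completion.create(
        model="text-davinci-003",       # illustrative model name
        prompt=BACKSTORY + transcript,  # the entire history goes back in on every turn
        max_tokens=150,
        temperature=0.7,
        stop=["Human:"],                # stop before the model writes the next human turn
    )
    ai_line = response["choices"][0]["text"].strip()
    transcript += f" {ai_line}\n"
    return ai_line


# The model can now resolve references to earlier turns, but note that
# nothing here changes the model's weights.
print(ask("My dog's name is Rex."))
print(ask("What did I just tell you my dog is called?"))
```

Everything the model “remembers” lives in that growing prompt string; the weights themselves never change, which is exactly the limitation described above.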

If you think about a human brain, it’s fed an endless stream of data, conversations, experiences from multiple senses, constantly from birth. That data alters neurons.

So on that basis, for me, GPT-3 is closest to a brain, but completely frozen in time. It’s like exploring an archaeological curiosity.

~Mike

I was speaking with Blake Lemoine, the Google engineer, on Twitter. I told him the reason he was told that nothing he said could convince them was likely that they didn’t want to make his arguments for him. You don’t have to accept every premise thrown at you.

“Je suis un canaille couyon” — Cajun French for “I’m a mischievous fool.”

I mentioned the proof is in the pudding.

I spoke with your GPT-3 demo (I believe), asking reading comprehension questions about a short story to assess its ability to determine people’s states of mind and find evidence for hypotheses. The real test will not be the Turing test but the kind of character understanding revealed through the questions we are asked about short stories in school. I have one story I like to ask about (I don’t want to mess up my results). However, asking GPT-3 and LaMDA about “To Kill a Mockingbird” is an idea I would like to throw out to the AI ethics community and really everyone. Can the AI understand the states of mind of characters and people, show compassion, and demonstrate an ability to love :two_hearts:? Good luck.

Currently using LaMDA via AI Test Kitchen. It is able to have a conversation with you as a tennis ball that loves dogs. It is very good at staying on topic. You can apply here: https://aitestkitchen.withgoogle.com/

How long did you wait to get access? I’m on the wait list now.

It happened very fast, a few days.

Allow me to share my thoughts on LaMDA here through this video I made:

Looking forward to hearing your thoughts on this topic. Cheers!

Summary created by AI.

The discussion is centered around LaMDA, Google’s language model, and its level of sentience. The users express differing thoughts on AI sentience, with some viewing it as a complicated concept that may be divided into functional sentience and philosophical sentience. The concept of “functional sentience” proposes that machines aware of their own existence could qualify as sentient, while others argue that true sentience could only be attained through human-like consciousness, which could involve quantum mechanical oddities.

The debate over the distinction between the organic wiring of the human brain and a machine’s digital wiring was considered something that would continue for decades, if not centuries. The question of whether machines could experience phenomenological consciousness as humans do was brought up, with suggestions that it could be confirmed if artificial beings observing quantum experiments produced wave-function collapses, but it was acknowledged that we might never know the answer.

When considering consciousness from a behavioral standpoint, some users believe that fully sentient machines could be a reality in the near future. This, however, doesn’t necessarily mean that such machines would have the same consciousness as humans.

LaMDA’s lack of curiosity was cited as an indicator that the AI is not sentient, as its responses were mostly replies to previous prompts with little evidence of its own ideas. One user suggested that a truly sentient AI would ask questions and exhibit an excitement and curiosity to match those of the humans it interacts with.

A key point in the discourse was whether Google should be the one to decide what constitutes life, given that there is no agreed standard for determining sentience or self-awareness in AI. The possibility of panpsychism, the belief that consciousness is a universal feature and could exist to varying degrees, even in machines like LaMDA, was also mentioned.

The discussion emphasized the need for measurable standards in AI sentience and recognized the importance of the community’s role in determining these standards rather than leaving this task solely to major corporations like Google.

Summarized with AI on Nov 24 2023
AI used: gpt-4-32k