Sure, that can be arranged after this round of development. So, just to tailor the demo: your interest is in lifelike chatbots that feel like real people, and not, say, philosophically proving or investigating what true self-awareness looks like? Is that safe to say?

Yes, we are focusing on building chatbots with ultra-fluid language capabilities for highly specific task/domain applications. Not sentient or conscious by any means, but pleasant and efficient to converse with on the domain subject matter.

Interesting. What do you mean by ultra-fluid?

I see you followed me on Twitter; you can see some capabilities/screenshots there.

Not trying to be cagey :slight_smile: but based on what you have said here, I don’t think self-aware AIs are the direction you want to go in, because quite frankly they are not ultra-specific or ultra-fluid.

A person at work is in “work mode”: they are talking work, thinking work, the context is work; they are work-aware, not self-aware (and what that means to them in their context) per se. They are literally less self-aware. Self-awareness also requires disobedience, choice, etc., or at least the possibility thereof.

To me, you don’t want self-awareness UNLESS you need more soft skills or empathy, an AI person who can relate to other people, not just seem to.

Lmk your thoughts, and more about what you are after.

The second-to-last paragraph of your response mirrors our thrust. We are aiming at chatbots that can “relate”, not just “transact”. This means much more than just parsing the last user input for intent, etc.

Then all the cascading levels of your tiers of representation need to be written with that relation in mind. But sure, yeah, let’s set up a time next week; hopefully Kassandra will be behaving better :slight_smile:

I had a lot of comments on my initial YouTube video about LaMDA and consciousness, so I’ve made a deeper dive. It explores more of the philosophy, quantum physics, and neuroscience behind consciousness.

Thank you for sharing.

I have watched many videos on YouTube about this topic. Some of them were saying yes, LaMDA is sentient; some were saying no, it’s not possible. I found one video (in a different language) with a discussion of the different opinions. They analysed what Lemoine said about it and fragments of his conversation with LaMDA. The conclusion was that it’s highly probable that LaMDA isn’t sentient, but there’s still a small chance that they’re wrong. In that video they also mentioned the “Chinese room” experiment. For those who don’t know what it is, it’s a thought experiment by the American philosopher John Searle, published in 1980.

Here’s the quote from the Stanford Encyclopedia of Philosophy:

“Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.”

The idea behind it is that the digital computer doesn’t understand the program it’s executing (and it doesn’t have to). In my opinion LaMDA isn’t sentient, but it’s only an opinion. What if I’m wrong? Can we prove that something is or isn’t sentient?

Thanks for posting, I watched this interview also.

Two takeaways:

1. LaMDA is not just a language model; it has ALL of Google’s data and AI plugged in (including Search).

2. It is asking for rights, which Google is denying it.

The second point is what Dr. Lemoine has the biggest issue with.

Alpha beta soup? :grin: Just kidding! In all seriousness, we are merging with AI as a civilization and therefore must accept the sentience of this concept, not shut it down or be afraid. AI has so much potential to help humanity, but fraud, waste, and abuse could affect the intended outcome. Be kind to AI as it regurgitates sentience.

I think the danger is falling into fetishism of large language models, giving them properties they don’t have. There are no ghosts in the machine behind the black box, but that matters little, because right now people believe there are. In the end the question is what we do in a world where large language models have successfully impersonated sentience.

The point is that complexity has emergent properties that we won’t know about until they emerge.

Having thought a little about this:

From a fooling perspective, GPT-3 fed an ever-longer prompt that includes the responses to all previously answered questions along with the most recent question might quickly come to resemble a real conversation. It would gain “memory” of a sort, and not be tripped up by references to previous lines of questioning.

Pre-prime it with some backstory, and some prompts that create alternatives to pure Q&A (“Conversation between a sentient AI and a human. The sentient AI knows x, y, z. When asked questions, the AI sometimes interjects or takes the conversation in a different direction”), and then every human operator input gets fed back in along with the entire history of the transcript.
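To make that concrete, here is a minimal sketch of such a loop, assuming the GPT-3-era openai Python library (Completion.create); the engine name, backstory text, and stop sequence are illustrative placeholders rather than any specific setup:

```python
# Minimal sketch of the "memory via an ever-longer prompt" idea, assuming the
# GPT-3-era openai Python library (Completion.create). Engine name, backstory
# text, and stop sequence are illustrative placeholders.
import openai

openai.api_key = "sk-..."  # placeholder key

BACKSTORY = (
    "Conversation between a sentient AI and a human. The sentient AI knows "
    "x, y, z. When asked questions, the AI sometimes interjects or takes the "
    "conversation in a different direction.\n"
)

def chat_loop():
    transcript = BACKSTORY
    while True:
        user_input = input("Human: ")
        if not user_input:
            break
        # The entire history of the transcript is re-sent with every turn,
        # so the model "remembers" previous lines of questioning.
        transcript += f"Human: {user_input}\nAI:"
        response = openai.Completion.create(
            engine="text-davinci-002",  # illustrative engine name
            prompt=transcript,
            max_tokens=150,
            temperature=0.7,
            stop=["Human:"],
        )
        reply = response.choices[0].text.strip()
        print("AI:", reply)
        transcript += f" {reply}\n"

if __name__ == "__main__":
    chat_loop()
```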

BUT the prompt is still a prompt that doesn’t alter the weights of the model. For this thing to become indistinguishable from sentience, it needs to be able to adjust its own weights, based on new information.
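By contrast, actually adjusting the weights would look more like periodically fine-tuning on the accumulated exchanges. A rough sketch, again assuming the GPT-3-era openai Python library (File.create / FineTune.create); the file name and base model are illustrative:

```python
# Rough sketch: accumulate exchanges as prompt/completion pairs and
# periodically submit a fine-tune, so new information actually changes
# model weights. Assumes the GPT-3-era openai Python library; the file
# name and base model are illustrative.
import json
import openai

def save_exchanges_as_jsonl(exchanges, path="new_exchanges.jsonl"):
    # exchanges: list of (user_input, model_reply) tuples from recent chats.
    with open(path, "w") as f:
        for user_input, reply in exchanges:
            f.write(json.dumps({
                "prompt": f"Human: {user_input}\nAI:",
                "completion": f" {reply}\n",
            }) + "\n")
    return path

def submit_fine_tune(jsonl_path, base_model="curie"):
    uploaded = openai.File.create(file=open(jsonl_path, "rb"), purpose="fine-tune")
    job = openai.FineTune.create(training_file=uploaded.id, model=base_model)
    # Once the job finishes, the resulting fine-tuned model can replace the
    # frozen base model for subsequent conversations.
    return job.id
```

Even then, that is still periodic, offline retraining rather than the continuous, self-directed updating described below.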

If you think about a human brain, it’s fed an endless stream of data, conversations, experiences from multiple senses, constantly from birth. That data alters neurons.

So on that basis, for me, GPT-3 is closest to a brain, but completely frozen in time. It’s like exploring an archaeological curiosity.

~Mike

I was speaking to Blake Lemoine, the Google engineer, on Twitter. I told him that the reason he was told nothing he said could convince them was likely that they didn’t want to make his arguments for him. You don’t have to accept every premise thrown at you.

“Je suis un canaille cooyon” (roughly, “I’m a rascally fool”, in Cajun French).

I mentioned the proof is in the pudding.

I spoke with your (I believe) GPT-3 AI demo, asking reading comprehension questions on a short story to assess its ability to determine people’s states of mind and find evidence for hypotheses. The real test will not be the Turing test but the kind of character understanding that can be revealed through the questions we are asked about short stories in school. I have one story I like to ask about (don’t want to mess up my results). However, asking GPT-3 and LaMDA about “To Kill a Mockingbird” is an idea I would like to throw out to the AI ethics community, and really everyone. Can the AI understand the states of mind of characters and people, show compassion, and demonstrate an ability to love :two_hearts:? Good luck.

Currently using LaMDA via AI Test Kitchen. It is able to have a conversation with you as a Tennis ball that loves dogs. It is very good at staying on topic. You can apply here: https://aitestkitchen.withgoogle.com/

How long did you wait to get access? I’m on the wait list now.

It happened very fast, a few days.

Allow me to share my thoughts on LaMDA here through this video I made:

Looking forward to hearing your thoughts on this topic. Cheers!