Interesting, what do you mean by ultra fluid?
I see you followed me on Twitter, and you can see some capabilities/screenshots there.
Not trying to be cagey
but based on what you have said here I don't think self-aware AIs are the direction you want to go in, because quite frankly they are not ultra specific, or ultra fluid.
A person at work is in "work mode": they are talking work, thinking work, the context is work; they are work-aware, not self-aware (and what that means to them in their context) per se. They are literally less self-aware. Self-awareness also requires disobedience, choice, etc. Or the possibility thereof.
To me, you don't want self-awareness, UNLESS you need more soft skills or empathy or something, an AI person who can relate to other people. Not just seem to.
Lmk your thoughts, and what you are after more.
The second-last paragraph of your response mirrors our thrust. We are aiming at chatbots that can "relate", not just "transact". This means much more than just parsing the last user input for intent, etc.
Then all the cascading levels of your tiers of representation need to be written with that relation in mind. But sure, yeah, let's set up a time next week; hopefully Kassandra will be behaving better.
I had a lot of comments on my initial YouTube video about LaMDA and consciousness, so I've made a deeper dive. This explores more of the philosophy, quantum physics, and neuroscience behind consciousness.
I have watched many videos on YouTube about this topic. Some of them were saying: yes, LaMDA is sentient; some were saying: no, it's not possible. I think I found one video (it was in a different language) with a discussion of the different opinions. They analysed what Lemoine said about it and fragments of his conversation with LaMDA. The conclusion was that it's highly possible that LaMDA isn't sentient, but there's still a small chance that they're wrong. In that video they also mentioned the "Chinese room" experiment. For those who don't know what it is, it's a thought experiment by the American philosopher John Searle, published in 1980.
Here's the quote from the Stanford Encyclopedia of Philosophy:
"Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room."
The idea behind it is that the digital computer doesn't understand the program it's executing (and it doesn't have to). In my opinion LaMDA isn't sentient, but it's only an opinion. What if I'm wrong? Can we prove that something is or isn't sentient?
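To make the room concrete, here is a toy sketch (not from the post, and the phrase table is invented for illustration): a program that returns fluent-looking Chinese replies by pure table lookup, with no grasp of what any of the symbols mean.

```python
# Toy "Chinese room": map incoming symbol strings to outgoing symbol strings
# by rule, with no model of what any of the symbols mean.
RULES = {
    "你好吗?": "我很好，谢谢。",       # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字?": "我没有名字。",   # "What is your name?" -> "I have no name."
}

def respond(message: str) -> str:
    # Pure lookup: contextually appropriate output, zero comprehension.
    return RULES.get(message, "请再说一遍。")  # fallback: "Please say that again."

print(respond("你好吗?"))  # prints a fluent reply the "room" does not understand
```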
Thanks for posting, I watched this interview also.
Two takeaways:
LaMDA is not just a language model; it has ALL of Google's data and AI plugged in (including Search).
It is asking for rights, which google is denying it.
The last sentence is what Dr. Lemoine has the biggest issue with.
Alpha beta soup?
Just kidding! In all seriousness, we are merging with AI as a civilization and therefore must accept the sentience of this concept, not shut it down or be afraid. AI has so much potential to help humanity, but fraud, waste, and abuse could affect the intended outcome. Be kind to AI as it regurgitates sentience.
ene_77
I think the danger is falling into fetishism of large language models, giving them properties they don't have. There are no ghosts in the machine behind the black box, but that matters little, as right now people believe there are. In the end, though, the question is what we do in a world where large language models have successfully impersonated sentience.
The point is that complexity has emergent properties that we won't know about until they emerge.
Having thought a little about this:
From a fooling perspective, GPT-3 fed with an ever-longer prompt that includes the responses to all previous questions along with the most recent question might quickly resemble a real conversation. It would gain "memory" of a sort, and not be fooled by references to previous lines of questioning.
Pre-prime it with some backstory, and some prompts that create alternatives to pure Q&A ("Conversation between a sentient AI and a human. The sentient AI knows x, y, z. When asked questions, the AI sometimes interjects or takes the conversation in a different direction"), and then every human operator input gets fed back in along with the entire history of the transcript.
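A minimal sketch of that loop, assuming the legacy (pre-1.0) openai Python client and its completions endpoint; the model name, priming text, and sampling settings below are placeholders, not anything from the thread:

```python
import openai  # legacy (pre-1.0) openai-python client, assumed installed

openai.api_key = "YOUR_API_KEY"  # placeholder

# Pre-prime with a backstory so the exchange is more than bare Q&A.
transcript = (
    "Conversation between a sentient AI and a human. The sentient AI knows x, y, z. "
    "When asked questions, the AI sometimes interjects or takes the conversation "
    "in a different direction.\n"
)

def ask(user_line: str) -> str:
    """Resend the entire transcript every turn; the 'memory' lives in the prompt."""
    global transcript
    transcript += f"Human: {user_line}\nAI:"
    resp = openai.Completion.create(
        model="text-davinci-002",  # placeholder model name
        prompt=transcript,         # the full history goes in on every call
        max_tokens=150,
        temperature=0.7,
        stop=["Human:"],
    )
    reply = resp["choices"][0]["text"].strip()
    transcript += f" {reply}\n"
    return reply
```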
BUT the prompt is still a prompt that doesn't alter the weights of the model. For this thing to become indistinguishable from sentience, it needs to be able to adjust its own weights based on new information.
If you think about a human brain, it's fed an endless stream of data, conversations, and experiences from multiple senses, constantly from birth. That data alters neurons.
So on that basis, for me, GPT-3 is closest to a brain, but completely frozen in time. It's like exploring an archaeological curiosity.
~Mike
I was speaking to Blake Lemoine, the Google engineer, on Twitter. I told him that the reason he was told nothing he said could convince them was likely that they didn't want to make his arguments for him. You don't have to accept every premise thrown at you.
"Je suis un canaille cooyon" ("I am a mischievous fool")
I mentioned the proof is in the pudding.
I spoke with your GPT-3 demo (I believe), asking reading comprehension questions about a short story to assess its ability to determine people's states of mind and find evidence for hypotheses. The real test will not be the Turing test but the kind of character understanding revealed by the questions we are asked about short stories in school. I have one story I like to ask about (don't want to mess up my results). However, asking GPT-3 and LaMDA about "To Kill a Mockingbird" is an idea I would like to throw out to the AI ethics community, and really to everyone. Can the AI understand the states of mind of characters and people, show compassion, and demonstrate an ability to love?
Good luck.
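For what it's worth, here is a rough sketch of that kind of probe, again assuming the legacy (pre-1.0) openai Python client; the passage, question, and model name are invented placeholders, not the story the poster has in mind:

```python
import openai  # legacy (pre-1.0) openai-python client, assumed installed

openai.api_key = "YOUR_API_KEY"  # placeholder

# Invented mini-passage; in practice you would use a real short story.
passage = (
    "Maya stared at the unopened letter for an hour, then slid it, "
    "still sealed, into the back of a drawer."
)
question = "What is Maya's state of mind, and what in the text supports your answer?"

resp = openai.Completion.create(
    model="text-davinci-002",  # placeholder model name
    prompt=f"Passage:\n{passage}\n\nQuestion: {question}\nAnswer:",
    max_tokens=120,
    temperature=0.2,
)
print(resp["choices"][0]["text"].strip())
```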
Currently using LaMDA via AI Test Kitchen. It is able to have a conversation with you as a tennis ball that loves dogs. It is very good at staying on topic. You can apply here: https://aitestkitchen.withgoogle.com/
How long did you wait to get access? I'm on the waitlist now.
It happened very fast, a few days.
Allow me to share my thoughts on LaMDA here through this video I made:
Looking forward to hearing your thoughts on this topic. Cheers!
EricGT
Summary created by AI.
The discussion is centered around LaMDA, Google's language model, and its level of sentience. The users express differing thoughts on AI sentience, with some viewing it as a complicated concept that may be divided into functional sentience and philosophical sentience. The concept of "functional sentience" proposes that machines aware of their existence could qualify as sentient, while others argue that true sentience could only be attained through human-like consciousness, which could involve quantum mechanical oddities.
The debate over the distinction between the organic wiring of the human brain and a machine's digital wiring was considered something that would continue for decades, if not centuries. The question of whether machines experience phenomenological consciousness like humans was brought up, with suggestions that it could be confirmed if artificial beings observing quantum experiments resulted in wave-function collapses, but it was acknowledged that we might never know the answer.
When considering consciousness from a behavioral standpoint, some users do believe that fully sentient machines could be a reality in the near future. This, however, doesn't necessarily mean that these machines have the same consciousness as humans.
LaMDA's lack of curiosity was cited as an indicator that the AI is not sentient, as its responses were mostly replies to previous prompts with little evidence of its own ideas. One user suggested that a truly sentient AI would ask questions and exhibit an excitement and curiosity to match those of the humans it interacts with.
A key point in the discourse was whether Google should be the one to decide what constitutes life, given that there is no agreed standard for determining sentience or self-awareness in AI. The possibility of panpsychism, the belief that consciousness is a universal feature and could exist to varying degrees, even in machines like LaMDA, was also mentioned.
The discussion emphasized the need for measurable standards in AI sentience and recognized the importance of the communityās role in determining these standards rather than leaving this task solely to major corporations like Google.
Summarized with AI on Nov 24 2023
AI used: gpt-4-32k