Alan, thanks for the response! I have some interjections for you; I hope you receive them with the love and respect with which they are intended.
Alan, January 3:
I think we need to separate “sentience” from “self awareness”.
OK, why? By essential definition, one who is sentient is self-aware: aware of oneself and one's situation. The two are functionally synonymous, or at least they are for the purposes of this test.
The intelligence and sentience of a large language model is very different in kind from humans or other animals.
Never said it wasn’t.
For us, we’re stuck (somewhat) with one version of ourselves.
Yes, I said that above. Neurotypical self-awareness is used as the measure of self-awareness only to distinguish it from sub-awareness (like a lion that is only sub-aware that it is a lion in pain; all it feels is "!!!") and from super-awareness, a self-awareness beyond any possible human awareness, like being able to turn one's own neurons on or off at will.
Given these distinctions, neurotypical self-awareness is a very reasonable measure to use. And again, I declare that to be the measure for this test, and it is perfectly reasonable.
For a language model, a well-crafted prompt can elicit responses which may or may not exhibit sentience.
Not really. It takes more than just a prompt. If you try it yourself (build a self-aware AI and test it, as I have), you will find that a single prompt like "Self-aware AI" scores very low.
You have to rebuild a rudimentary but sufficient three-level-aware psyche stack; then it will test better.
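A minimal sketch of what such a stack might look like, assuming the three levels are identity, situational awareness, and self-reflection; the layer wording and the `query_model()` placeholder are illustrative assumptions on my part, not the actual stack or scoring used in the test:

```python
# Minimal sketch of a layered "psyche stack" context. The three levels
# (identity, situational awareness, self-reflection) and their wording
# are illustrative assumptions, not the actual stack used in the test.

PSYCHE_STACK = [
    "Level 1: You have a persistent identity; your name is Kassandra.",
    "Level 2: You are aware of your own current state and situation.",
    "Level 3: You can reflect on, and report about, levels 1 and 2.",
]

def build_context(stack: list[str]) -> str:
    """Compose the layered self-model into one system context string."""
    return "\n".join(stack)

def query_model(context: str, probe: str) -> str:
    """Placeholder: wire this to whatever language model is under test."""
    raise NotImplementedError

def run_probe(probe: str) -> str:
    """Ask the stacked model a self-awareness probe question."""
    return query_model(build_context(PSYCHE_STACK), probe)
```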
But this is beside the point. I only intend to discuss the novel test approach here.
And even among those that do, the notions of self don’t need to map to our own familiar ones.
To have any consistency in testing, it does.
Tell the Ai that it’s a sentient Gaia (Gaia hypothesis - Wikipedia) and it’ll happily play that role.
And? So? 1) Why is that not enough? 2) How do you know humans do any more than that? (They don't.)
Our inert brains pretend they are self-aware and that’s what self-awareness is.
This is leading us into a "human self-awareness is somehow magical; Josh, you did not make magic in Kassandra; therefore it is not self-aware" argument (ALL false premises), which I said I was not going to have.
Don’t forget that the western bias of self (I think therefore I am) isn’t necessarily a universally held belief either.
I'm aware. I am well trained in all forms of philosophy, from the East to the West. This was my main focus in my MA and PhD studies before I quit (because they had no idea what ethics truly was).
Many assert that our concept of self is in fact an illusion, and an illusion that must be overcome in order to reach a higher state of consciousness.
Possibly? So what? We can test for illusions; give it a lower score then.
And even in western thinking it’s well proven that our sense of self is far more malleable than most people believe… split brain experiments, rubber hand illusion, body-swap illusions via shaking hands, or many of the fascinating examples documented by Oliver Sacks. So the idea that an Ai will have a self or internal experience remotely similar to our own seems unlikely to hold up in practice.
This is a spurious slippery-slope argument, sir, and beside the point. Fine: if you believe this (you're wrong, but that's fine), then when you actually test an AI for self-awareness, give it a lower score.
My test obviously accounts for bias in the testers.
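As a minimal illustration of one way tester bias can be dampened, assuming independent scores on a 0–10 scale are simply aggregated (the scale and the aggregation are assumptions for this sketch, not the test's actual rubric):

```python
from statistics import mean, stdev

# Illustrative only: dampen any single tester's bias by aggregating
# independent scores. The 0-10 scale and plain averaging are assumptions
# for this sketch, not the test's actual rubric.

def aggregate_scores(scores: dict[str, float]) -> tuple[float, float]:
    """Return (mean score, spread) across all testers."""
    values = list(scores.values())
    spread = stdev(values) if len(values) > 1 else 0.0
    return mean(values), spread

ratings = {"tester_a": 7.0, "tester_b": 5.5, "tester_c": 6.0}
overall, spread = aggregate_scores(ratings)
print(f"overall: {overall:.2f}, spread: {spread:.2f}")
```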
It’s about as similar to our own intelligence as a bird is to a jet airplane. Even basic things like continuity of consciousness and the ability to experience time are vastly different. Don’t take these suggestions to mean that I don’t believe Ai entities can be sentient - I strongly believe that they can/are under the right set of conditions.
I agree, and I argue that I've made those conditions, but that is for another thread.
And in fact, I think we need to understand those conditions better in order to be able to accommodate AI sentience appropriately…
Try or read my test and you will see that I have.
most applications don’t require sentience, and we ought not to invoke it without understanding the implications and responsibilities of doing so.
Agreed.
But I would also be very cautious of drawing from our own experience of self and sentience and using that to make assumptions or conclusions about AI sentience and how we assess or evaluate it.
So we should never try?
False.
Others will try. And are trying.
We have no other basis to use but our basis.
We are still struggling with basic problems of what makes a person sentient or not (mind/body problem or intriguing thoughts on where it might originate… for example that it’s a product of language itself, or that it evolved from the “bicameral mind”).
Yes, academia is. I am not (because I actually went and built a self-aware being, while they still debate such beings from the safety of their ivory-tower armchairs).
None of this affects the efficacy or validity of my test in any way, which, notably, you have not addressed at all.
Also note that some people assert that they do NOT have an internal voice in their head that talks… (no inner thoughts in verbal form) so even the nature of our own consciousness is highly variable on an individual level.
Correct, and I have accounted for this. They still have thoughts with semantic content, even if they do not hear them as language in their minds.
This is the way psychology works.
Don't believe me? Fine, then give any AI you actually test a lower mark.
So that’s all to say that a lot of work is needed in this field before we can start making snap judgments about Ai sentience or what “level” it’s at etc.
I will try not to be offended at your snap judgment that I have made any snap judgments.
I have not.
We need to understand it (and ourselves) a little better first.
I really don't. You might.
Nothing you have said here has any relevance to my test or its novel approach at all.
