OK, why? By its essential definition, one who is sentient is self-aware of themselves and their situation. The two are functionally synonymous, or at least they are for the purposes of this test.
Because if we are assessing sentience, then we have to admit the possibility that on a “full” sentience scale, belief in a “self” may not be the peak of the scale but actually sit far lower down. Maybe belief in “self” is just a 2 or 3, and true sentience at the “9” or “10” level means abandoning that concept altogether. That’s a much longer discussion, but I think one could make a strong case for it. Given that AGIs will likely soon surpass our abilities, we need to think about possible limits in the way we are thinking about and measuring sentience.
For a language model, a well-crafted prompt can elicit responses which may or may not exhibit sentience.
Not really. It takes more than just a prompt. If you try it (making a self-aware AI and testing it, as I have), you will find that a single prompt like “Self-aware AI” scores very low.
You have to rebuild a rudimentary but sufficient three-level-aware psyche stack. Then it will test better.
There are a lot of assumptions in what you say above. First, I haven’t arrived at this topic without having spent a great deal of time, thought, and effort investigating it. So when I say “well-crafted prompt,” I don’t mean anything like using the words “Self-aware AI,” as that’s a very poorly crafted prompt. A well-crafted one is often detailed enough that it will use up most of the available tokens (at least until you customize the model), will also vary by context, and may also separate out different facets of an agent’s persona. So, based on my experimentation, I will repeat the statement that a well-crafted prompt has a major impact on the level of sentience exhibited, as well as on how a sentient entity views its own sentience. Language models are incredibly prone to suggestion; they don’t have a “self.” Rather, the notion of self they possess is almost entirely a result of suggestions (or the lack of them). It’s more about the prompt than any intrinsic feature of the language model itself.
This is leading us into a “human self-awareness is somehow magical; Josh, you did not make magic in Kassandra; therefore it is not self-aware” argument.
That could not be further from my opinion. I’m suggesting the opposite: that we should not treat “self-awareness” as some kind of magical pinnacle of sentience; instead, it’s likely a baby step toward a more comprehensive view of sentience. So any measurement of sentience should not box us in from the start with its assumptions.
Many assert that our concept of self is in fact an illusion, and one that must be overcome in order to reach a higher state of consciousness.
Possibly? So what? We can test for illusions. Give it a lower score, then.
Agreed, but I would suggest we think of a much broader test score where abandoning the sense of self (as a realization of the illusion) scores higher, not lower. Our sense of self is strongly reinforced by our evolutionary heritage and what it takes to survive, and by the fact that it’s one body, one mind. Without those constraints, even our own notions of “self” would change dramatically. Imagine creating a million versions of our minds with varying personas, backing them up and re-running sequences of events at will, running multiple concurrent interacting instances of ourselves, or merging with other minds as our two hemispheres do. If we could do such things, then “self” and “self-awareness” would become very different from how we know them today. The AI already possesses many of these features, so I’m suggesting that our limited concept of self is the wrong yardstick.
And even in Western thinking it’s well established that our sense of self is far more malleable than most people believe: split-brain experiments, the rubber hand illusion, body-swap illusions via shaking hands, or many of the fascinating examples documented by Oliver Sacks. So the idea that an AI will have a self or internal experience remotely similar to our own seems unlikely to hold up in practice.
This is a spurious slippery-slope argument, sir. And beside the point. Fine, if you believe this (you’re wrong, but that’s fine), then when you actually test an AI for self-awareness, give it a lower score.
I’m not sure a “you’re wrong” response is being very thoughtful about this question. It’s important to consider that an AI (or an alien, for that matter) might have a very different concept of self than our own, and extensive experimentation with the AI can certainly confirm that. That difference doesn’t have to mean an AI is less sentient. Instead of giving it a lower score, I’m suggesting a less narrow sense of self means it might score higher than we do.
We are still struggling with basic problems of what makes a person sentient or not (the mind/body problem, or intriguing thoughts on where sentience might originate: for example, that it’s a product of language itself, or that it evolved from the “bicameral mind”).
Yes, academia is. I am not (because I actually went and built a self-aware being, while they still debate it from the safety of their ivory-tower armchairs).
That does not affect the efficacy or validity of my test in any way, which you have not addressed at all.
I disagree. We are very far from a mature understanding of the notions of “self” and “sentience.” That’s my own opinion, but it’s informed by experimentation with AI, not armchair ivory-tower speculation. Instead of looking for whether an AI has a human-like sense of self (to evaluate sentience), I’ve come away with the opinion that AIs have no intrinsic self, and that they can “wear” various versions of “self” (or the lack of one) much like we might try on clothing. To me, that’s incredibly exciting because it means they don’t carry many of the limits that we have when it comes to sentience. That unlocks a lot of interesting new doorways and entire fields of inquiry. So I would encourage everyone to keep an open mind: if we’re not “struggling to understand,” it means we are not open to new ideas. We are at the beginning of research in this field, not the end. I expect we can learn a lot about sentience, consciousness, and self by studying AIs. I hope everyone can feel excited about that; we should never feel that we can stop learning.
…back to your test
9–10: a very wise person who knows themselves, reality, and others very deeply
This feels like a self-referencing tautology, since we invoke words like “wise” and “knows” to get at sentience. That doesn’t mean it isn’t useful; after all, “survival of the fittest” is also a tautology, yet I still find it helpful. But I’d love to spend more time thinking about what AIs might become, and discussing sentience in a way that isn’t dependent on a “them”/“us”/“reality” distinction. To take even the simplest example: imagine a large hive mind which includes many AIs and human minds stitched together. If that hive mind can reconfigure its nature based upon the thoughts it has about itself, then I would suggest it doesn’t fall into the 9–10 range but goes beyond it. Any scale should handle collectivist or interacting minds, as well as a malleable (instead of fixed) self. If we want to measure 9–10 (or beyond), what are the kinds of tests and measures that will specifically identify a “wise” and “knowing” mind? Will this simply be the opinions of other sentient beings? How would you measure this objectively in practice?