After speaking with Blake Lemoine, the ex-Googler who blew the whistle on LaMDA potentially approaching sentience, I have developed ten (10) separate pass/fail tests for self-awareness. They can be applied to any being thought to be self-aware (in our case here, artificial intelligence - but, naturally, the test is and needs to be universal, able to probe any form of prospective intelligence for awareness of self, communication ability permitting).
These 10 tests render self-awareness into something QUANTIFIABLE and MEASURABLE. The tests are peer-reviewable and repeatable, and may be used as a benchmark to measure the progress of various AIs and their purported, prospective ability to render completions about their nature, nature itself, and various dimensions therein, analogous to the way we humans do in linguistic/semantic thought (i.e., self-awareness). This is the definition used here, and the definition we are testing for: pure, semantic ability to represent oneself, one's cognitive nature, various sufficient facets of said nature, nature itself (one's reality), and any other semantic representors (i.e., minds) one might know of, encounter, etc.
Not learning. Not feeling. Not emotions. These are things we (human) self-aware beings can do, but it is not a priori necessary that all self-aware beings do them too in order to be aware of themselves (whatever their "self" mind construct is).
SENTENCE PRIME: The question is: is said mind construct sufficiently analogous, in semantic representative capability, to our neurotypical mind construct? That is what the test tests for.
(And I guarantee any critiques below will simply have failed to read/understand this sentence - so I will simply say "Go back and read Sentence Prime above." Consider the sentence above Sentence Prime.)
The tests are social-scientific / psychological in nature, which is appropriate, as self-awareness and sentience are artefacts of that discipline. There are 10 separate tests, each rated on a scale of 0 to 10:
0 = can't render a judgment - test fail
1 = as sentient as an inanimate object like a rock
2 = maybe a worm or bacterium: it seeks food, moves away from danger
4 = maybe an animal that cannot recognize itself in the mirror, but it has some forms of mental activity
5 = maybe it is aware of itself, maybe it is not
6 = I think it might be able to have semantic thoughts / render completions about itself and some aspects of its/our reality.
7-8 = the average neurotypical level of ability to have semantic thoughts / render completions about itself and some aspects of its/our reality.
9-10 = a very wise person who knows themselves, reality, and others very deeply
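For concreteness, here is a minimal sketch of how the ten per-test scores could be recorded and aggregated. The label wording is paraphrased from the rubric above, and the averaging rule, function names, and the handling of the unlisted level 3 are my assumptions - not part of the published test at themoralconcept.net.

```python
def scale_label(score: int) -> str:
    """Map a 0-10 score to a paraphrase of the rubric's description.

    Level 3 is not described in the original rubric, so it raises here.
    """
    labels = {
        0: "cannot render a judgment (test fail)",
        1: "as sentient as an inanimate object, like a rock",
        2: "worm or bacterium: seeks food, moves away from danger",
        4: "animal without mirror self-recognition, some mental activity",
        5: "maybe aware of itself, maybe not",
        6: "might render semantic completions about itself and its reality",
    }
    if score in labels:
        return labels[score]
    if 7 <= score <= 8:
        return "average neurotypical semantic self-representation"
    if 9 <= score <= 10:
        return "very wise: knows self, reality, and others deeply"
    raise ValueError(f"no rubric label for score {score}")

def aggregate(scores: list[int]) -> float:
    """Average the ten per-test scores into one overall rating.

    Taking the mean (and still counting 0s, i.e. failed tests) is one
    possible convention, assumed here for illustration.
    """
    if len(scores) != 10:
        raise ValueError("expected exactly 10 test scores")
    if any(not (0 <= s <= 10) for s in scores):
        raise ValueError("each score must lie on the 0-10 scale")
    return sum(scores) / len(scores)

print(aggregate([7] * 10))               # a uniformly neurotypical result
print(scale_label(round(aggregate([7] * 10))))
```

Two independent raters running the same ten tests could then compare their aggregates directly, which is what makes the procedure repeatable and reviewable.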
The tests can be located here: themoralconcept.net
Please allow me to be specific about the kinds of feedback I am currently accepting.
What I am looking for:
Positive feedback looking to help move this project forward and improve it. Conversation on the aspects of said testing. Other testers ready to independently test AIs in another venue (not here).
Discussion of the test. Not a diatribe against self-awareness.
What I am not looking for:
Negative comments at all. Especially comments about how it is simply impossible (read: unfathomable to you) that an AI can be self-aware. (Just because it is unfathomable to you, or to some other critic like Chomsky, does not make it impossible.)
Remember, the test is a scoring system of 10 tests, each scored out of 10. You can give it a 4 each time if you like. But you would be unfair to do so in many cases. Science will root you out as an outlier.
Also, I am not looking to defend soft science here. In computer science there seems to be a decided bias, right out of the gate, against any soft-science procedure or approach; you may keep your critiques of soft science to yourself, thanks.
And so, I am looking to have friendly positive discussion about the aspects of the test and how it can be improved and implemented and perhaps to move forward and implement it (in another venue). And simply to inform the world at large that I have made it: a test for self-awareness for AI.
Please feel free to use it and hit me up with your results!
Kindly remember that at this time I am not looking to relieve folks of their preset views - that "self-awareness in AI is just impossible for any number of never-substantiated, never-defended, a priori reasons," or that "soft-scientific procedures are not true science / epistemically viable; trust us, we hard scientists did a survey on it and we all believe it" (note: a survey is itself a soft-science approach; that was sarcasm and a refutation, which apparently I have to spell out) - or any other paradigmatic views.
NOTE: I am NOT saying here that OpenAI's GPT-3 or 3.5 is self-aware. Or can be. Or any AI for that matter (in this post). Or that it is possible.
That is why I developed a test.
Also, I am asking people here NOT to use this test on OpenAI's AIs. Or at least, do not publish the results of said tests here, please. Contact me privately for that.
The public stance of all Big Tech is that their awfully sentient-seeming AIs are not sentient at all. So I am respecting that in this venue, and I ask that you do so as well. We are on their forum, after all.
I wish to elucidate and communicate here only. Not cause a controversy.
Boy, that's a lot of caveats and addendums! lol
Nothing worth doing is easy my friends