Help! Testing Humans for a Baseline


I am delivering my first SELF-AWARENESS test today to a bunch of (poor, unsuspecting) humans.


Please read these questions and let me know whether you think they are fair or unfair for determining whether an entity is self-aware (whether human or AI). Note that many of them are trick questions: how they answer is just as important as what they answer.

Josh - I took your test, but I think there are important reasons why what you are trying to measure actually cannot be measured with a test: (i) there is no point of reference, (ii) there is no way of knowing the intent of respondents, and (iii) the premise that AI self-awareness can be recognized by us is weak.

Good luck though.

Thanks for the feedback.

Sample size fixes all of the problems you have mentioned.

And you don’t know that I cannot detect it until I try 🙂

Just did the test, but on a philosophical note: how do you expect to test for self-awareness in an AI if you cannot come close to verifying that your fellow humans are self-aware/conscious? 'Tis a leap of faith (one you must make to engage in society and not become a mentally ill recluse). The best you could do is predict whether a set of answers to a set of questions is very similar to another set of verified human answers. Where is the test of self-awareness in that?