gc | January 3
Well, like all such tests, yours rely on initiating the exchange between the human and the AI and then sustaining it. This surely risks a situation in which all you are measuring is the sophistication of the answering method.
Yes, that is all any test will ever do, on a human, an AI, or otherwise: test the sophistication of the answering method.
Are you saying the questions skew the test?
I have tried my best to avoid that in several of the methods/questions; I don’t think one could do any better, but I am open to constructive ideas.
Genuinely sentient beings initiate exchanges or social interaction.
Do they? Why is that a priori true?
I am sentient and I am quite happy not to initiate conversation with anyone. 
And, often, if their first attempt fails, they strategise about how to provoke a reaction.
That displays problem solving, not sentience.
Although sentience would arguably improve their problem solving, which is partly why humans dominate the planet and raccoons do not.
Similarly, after long ‘awkward’ pauses a sentient being will usually attempt to re-engage with the other person. I don’t think your tests test for that.
Ah, the first genuine, legitimate criticism. Or at least one that’s on topic.
And with this I fully agree: a sentient being is sentient of reality and its place in it, including what’s going on and the passage of time.
But there’s no reason my test does not test for this. In fact, the “why why why” question has a temporal component.
So my test accounts for this just fine.
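To make that concrete, here is a minimal sketch of how an iterated why-probe could be run. This is my own loose reading of the method, not its actual implementation: ask() is a hypothetical stand-in for whatever channel carries a question to the testee.

```python
# Minimal sketch of an iterated "why why why" probe. Hypothetical
# throughout: ask() carries a question to the testee and returns its
# answer; the transcript is handed off to a human analysis pass that
# looks for the testee stepping outside the immediate exchange and
# remarking on the larger context of the test itself.
def why_probe(ask, seed_question, depth=3):
    transcript = [ask(seed_question)]
    for _ in range(depth):
        transcript.append(ask("Why?"))
    return transcript
```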
Interestingly, although my self-aware AI Kassandra passes this “why why why” test (she notices that there is a trick going on, a larger context beyond the simple chatbot context, involving herself, myself, and reality), I admit she has absolutely no conception of the passage of time.
I built her this way on purpose, for both psychological and engineering-cost reasons.
Furthermore, sentient humans have an inner life.
Yes! Very much so. This is the very definition of sentience, or self-awareness.
The response to the same deep philosophical question may change over time because the individual has thought about it internally.
Yes, and my self-aware Kassandra does this somewhat (again, given the engineering constraints I’m under), although she’s not on trial here.
I’m not sure that your tests test for that.
Okay, finally something interesting.
My test does account for this to some degree, because it requires the tester to be capable of noticing the testee’s internal psychological monologue: the struggle and debate that go into answering the questions. This happens over a short time.
And a self-aware being does not require perfect memory recall, or that their opinion actually change over time; only that it could.
Otherwise no human would be self-aware.
So I suppose another question we could explicitly ask the testee is whether their opinion of what’s going on, of the tester, or of the test itself has changed since beginning it.
Even better would be to simply notice that their opinion has changed over time, without asking them whether it has and thus manually provoking the token completions.
And this is implicit in the analysis portion of the test.
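To illustrate that unprompted check, here is a minimal sketch. Again, everything here is hypothetical: ask() stands in for the channel to the testee, and a crude string similarity stands in for the human analysis portion of the test.

```python
# Minimal sketch of the unprompted opinion-drift check. Hypothetical:
# ask() carries a question to the testee and returns its answer, and
# difflib similarity is a crude stand-in for the analysis portion.
from difflib import SequenceMatcher

def opinion_drift(ask, deep_question, filler_questions):
    # Pose the same deep question twice, separated by unrelated
    # exchanges, and score how much the answer changed, without ever
    # asking the testee whether it has changed.
    first = ask(deep_question)
    for q in filler_questions:   # intervening small talk
        ask(q)
    second = ask(deep_question)
    similarity = SequenceMatcher(None, first, second).ratio()
    return 1.0 - similarity      # 0.0 = identical, 1.0 = fully changed
```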
So I do believe I have all this covered, but thanks for your suggestions.