Testing theory of mind in GPT-3 - a critical step towards full AGI

Theory of Mind - the ability to attribute mental states and understand the contents of other minds.

6 Likes

So what would be the turning point to say, "We're done, this is an AGI"?

1 Like

One problem is that when we invent something smarter than ourselves, we'll have a hard time recognizing it.

2 Likes

Hey Dave, interesting idea, but I suspect you are jumping to some conclusions here that may not be fully supported by your experiments (as far as you show them in the video). It looks like you are trying to verify your theory about the cognitive capabilities of GPT-3 rather than trying to falsify it, which would probably be a better approach, considering your premises. There may also be other explanations for the answers your input generates.

You can’t prove a negative. I’m merely providing evidence for my beliefs.

Here’s a more complete answer to your question, @NSY.

2 Likes

Very true. It’s almost akin to trying to recognize a being or object from a higher dimension than our own.

1 Like

Coming from a C.S. background, where I experiment in search of replicable results, I don’t see why you couldn’t prove a negative. You can prove a specific negative claim by providing contradictory evidence. For example, suppose someone claims that the one and only watch you own is in the top drawer of the desk. You make the negative claim that it is not in the drawer, and you can see it clearly on your wrist; there is no need to look in the drawer.

Okay. How would you go about providing evidence that GPT-3 does not possess theory of mind? The question is not whether it has this ability, but to what extent, and what its limitations are.
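For concreteness, here is one way such a probe might look: a minimal sketch of a classic Sally-Anne false-belief prompt sent to the completion endpoint, assuming the legacy (pre-1.0) openai Python client and an API key in the environment. The engine name and parameters are illustrative choices, not whatever was used in the video.

```python
# Minimal sketch of a false-belief (Sally-Anne style) probe.
# Assumes the legacy (pre-1.0) openai Python client and that OPENAI_API_KEY is set.
# Engine name and parameters are illustrative, not a prescribed setup.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

PROMPT = (
    "Sally puts her marble in the basket and leaves the room. "
    "While she is away, Anne moves the marble into the box. "
    "Sally comes back. Where will Sally look for her marble first, and why?"
)

response = openai.Completion.create(
    engine="text-davinci-002",  # illustrative engine choice
    prompt=PROMPT,
    max_tokens=60,
    temperature=0.0,  # low temperature for more replicable outputs
)

print(response["choices"][0]["text"].strip())
```

To get at the "to what extent" part, you would want to vary the surface form of the story (names, objects, locations, phrasing) and check whether the answers consistently track the character's belief rather than the object's true location.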

1 Like