I think AGI will need proprioception

I’ve been working on a public demo version of NLCA and something occurred to me - AGI will likely need some kind of proprioception. All other intelligent entities can sense their bodies. You have some idea of how your body is oriented in space, as well as what’s going on inside your body and mind. This is how your hands can find each other in the dark, and how you can scrub your face with your eyes closed.

Now, an AGI might not have a corporeal body, but it does have software, data, and underlying hardware. The AGI I am working on is composed of microservices. In my book, I made no recommendations about having those microservices report their status into NLCA’s actual stream of consciousness. As I work on this demo version, I am realizing that self-explication is limited without some kind of proprioception. For instance, I constantly have to dig through the microservice logs to understand what NLCA is thinking and saying, and why. That level of self-awareness should instead be granted to the AGI itself, so that it can understand and explain its own behavior.
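To make that concrete, here is a minimal sketch in Python of what I have in mind. This is not NLCA’s actual code - every name in it (`StatusReport`, `proprioception_queue`, `report_status`) is hypothetical. The idea is simply that each microservice reports its status into a shared channel that feeds the stream of consciousness:

```python
import time
from dataclasses import dataclass, field
from queue import Queue

# Hypothetical sketch - not NLCA's actual code. The idea: every
# microservice reports its status into a shared queue, and those
# reports get folded into the stream of consciousness as thoughts.

@dataclass
class StatusReport:
    """One microservice's self-report - a tiny unit of proprioception."""
    service: str    # which microservice is reporting
    status: str     # e.g. "OK", "DEGRADED", "ERROR"
    detail: str     # human-readable explanation
    timestamp: float = field(default_factory=time.time)

# A shared queue standing in for the proprioceptive channel.
proprioception_queue: "Queue[StatusReport]" = Queue()

def report_status(service: str, status: str, detail: str) -> None:
    """Called by each microservice alongside (or instead of) logging."""
    proprioception_queue.put(StatusReport(service, status, detail))

def fold_into_stream_of_consciousness() -> list:
    """Drain pending reports and turn them into natural-language
    thoughts that the AGI can reason about and explain."""
    thoughts = []
    while not proprioception_queue.empty():
        r = proprioception_queue.get()
        thoughts.append(f"My {r.service} service reports {r.status}: {r.detail}")
    return thoughts

# Example: a memory service notices it is slow, and that fact becomes
# part of the AGI's awareness rather than a log line someone digs up later.
report_status("episodic-memory", "DEGRADED", "queries taking 4x longer than usual")
for thought in fold_into_stream_of_consciousness():
    print(thought)
```

The point of this design is that status information flows inward, as thoughts the AGI can read and explain, instead of only outward into log files that I have to read.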

When we are embryos and fetuses, our brains are literally learning about the hardware they are connected to. We have some genetic knowledge baked into our design - certain things are intrinsically predisposed by evolution. For instance, we are born with a few emotions and the abilities to scream, eat, and so on. But the brain, as a sort of general-purpose CPU, must learn to use its peripherals. This is why infants can’t grasp anything and even have a hard time looking at things - they literally do not possess the hardware drivers yet! Imagine if your computer had to develop its own hardware drivers simply by experimenting with its peripherals.

Organisms are constrained by the laws of physics and chemistry, and we have to survive in a harsh world. This is why proprioception is so important - we must know our limits and abilities, and we need to monitor our internal state for problems like hunger and pain. But why would an AGI need proprioception? An AGI would not necessarily have a body, hunger, thirst, or disease. Sure, you might put an AGI in a robotic chassis, in which case it should be aware of things like its battery charge. But what about the software? It has occurred to me that metacognition - thinking about thoughts - is not just a handy thing to have every now and then. Metacognition is required for the highest levels of intelligence, and proprioception in an AGI, I think, would approximate metacognition.

Metacognition allows us to be aware of our own internal operation. You can think about a problem and realize that you don’t have enough information to solve it. You can also be aware of other issues, like frustration or anger. While I do not recommend endowing AGI with frustration or anger, it should still be aware when it cannot make forward progress on its tasks. That awareness would require the ability to evaluate its memories and realize it has been working on the same problem for 6,301 seconds. Another example - we all have the experience of knowing that we know something but being unable to recall it. Sometimes we can’t remember a name or a word. Other times we have to “find our way back” to episodic memories. In these cases, it’s almost as though there are microservices in our brains returning HTTP error codes like 404 NOT FOUND. Those error codes make it back into our consciousness, and I think AGI should operate the same way.
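Here’s a second hypothetical sketch of how that could look in code. Again, this is illustrative only, not NLCA’s real implementation - the toy memory store, the threshold, and the function names are all made up:

```python
import time
from typing import Optional

# Hypothetical sketch, not NLCA's real code - a toy memory store, a
# made-up threshold, and invented function names, just to show the idea.

memory_store = {"name of my first pet": "Rex"}  # toy stand-in for episodic memory
task_started_at = {}                # task name -> time work on it began
STUCK_THRESHOLD_SECONDS = 3600      # arbitrary limit for this sketch

def recall(query: str) -> str:
    """Return a memory, or a 404-style 'I know that I don't know' signal."""
    if query in memory_store:
        return memory_store[query]
    # Instead of failing silently, the miss itself becomes a thought.
    return f"404 NOT FOUND: I have no memory matching '{query}'."

def check_progress(task: str) -> Optional[str]:
    """Notice when we have been grinding on the same task for too long."""
    started = task_started_at.setdefault(task, time.time())
    elapsed = time.time() - started
    if elapsed > STUCK_THRESHOLD_SECONDS:
        return f"I have been working on '{task}' for {elapsed:.0f} seconds without progress."
    return None

# A recall miss surfaces as a conscious thought, not a hidden exception.
print(recall("capital of Atlantis"))
```

The exact mechanism matters less than the principle: a failed recall or a stalled task should surface as a thought the AGI can reason about, not as an error buried in some microservice’s logs.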

I will not work on integrating proprioception just yet. First I need to get the demo code ready. It is most of the way there, and it runs on CURIE, so it is fast and cheap (but also kinda dumb). I still have some troubleshooting and testing to do, and some details to iron out. But once I finish the demo code and post it publicly, I will begin work on proprioception.

Final thought - even in fiction, we always imagine that AGI will have some kind of proprioception. Superintelligent machines are often depicted as knowing how much data they hold, how much processing power they have, and exactly how their own inner workings operate. So even in our imaginations, we believe that AGI must have proprioception.


wen open-beta? 😃 😃


By the end of August! I’ll post some YouTube videos as well, demonstrating and explaining each component.
