I will give you one idea that I discuss in the book: functional sentience versus philosophical sentience.
What is sentience?
We assume that all humans are sentient, as are many of the more intelligent animals, such as cats, dogs, dolphins, chimpanzees, and other apes. There is a magical quality to sentience: we are made of matter, yet we are also self-aware. This is what I call philosophical sentience. It is impossible to define or measure because it is a completely subjective phenomenon. I do not know if machines can ever have a subjective experience.
This is why I define functional sentience as the objective behaviors we would expect of a sentient entity: self-explication, linking cause and effect to personal decisions, and so on. What sorts of intelligence and cognition would be required for functional sentience? First of all, you need to remember almost everything. If I ask you what you did, you can tell me not only your recent actions but also why you took them. This is “self-explication”. Humans can recall their subjective experience as well as their reasons for their behaviors (although most of our motivations are unconscious, so by this definition, humans are only barely sentient!). You also need to identify yourself in your memories, to differentiate between your actions and others’ actions. This gives rise to the concepts of personal responsibility and accountability.
One of my goals for NLCA is to create something that is functionally sentient. It can remember what it did, what it thought, and why it made decisions. Furthermore, by recording this data, it can learn from experience.
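To make this concrete, here is a minimal sketch of what such a memory record could look like. This is purely illustrative: the class names, fields, and methods are my own assumptions for this example, not NLCA's actual data structures. The key idea is that every action is stored together with the reasoning behind it and an attribution of who acted, which is what makes self-explication and accountability possible later.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MemoryRecord:
    """One remembered event: what was done and why, attributed to an actor."""
    action: str
    reasoning: str
    actor: str = "self"  # distinguishing self from others enables accountability
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class Memory:
    """A hypothetical append-only log of actions, thoughts, and reasons."""

    def __init__(self) -> None:
        self.records: list[MemoryRecord] = []

    def record(self, action: str, reasoning: str, actor: str = "self") -> None:
        self.records.append(MemoryRecord(action, reasoning, actor))

    def explain(self, query: str) -> list[tuple[str, str]]:
        """Self-explication: recall my own matching actions with their reasons."""
        return [
            (r.action, r.reasoning)
            for r in self.records
            if r.actor == "self" and query.lower() in r.action.lower()
        ]

# Usage: ask the system what it did and why.
mem = Memory()
mem.record("sent the report", "the deadline was today")
mem.record("opened the window", "the room was warm", actor="other")
```

Asking `mem.explain("report")` would return the action paired with its stated reason, while actions attributed to others are excluded, which is the differentiation between "my actions" and "others' actions" described above.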
Why would I spend time on this?
After research and experimentation, I realized that sentience is required for truly intelligent entities. You can have the smartest ML algorithm that can solve narrow problems, but you will never be able to reason with it or ask it what it thinks and why, and it can never take personal responsibility. Therefore, I believe that sentience is required for full AGI to be realized. When you think of the most intelligent people in history, they had very distinct personalities, very strong beliefs about who they were, and very strong convictions about what is good and right. Furthermore, they spent a lot of time thinking about what they knew and believed, and why they held those beliefs. This self-reflection is required to hold oneself accountable, especially when you consider moral decisions and ethics. Since the definition of AGI is the ability to learn any intellectual task a human can, it makes sense to me that moral and ethical decisions are intellectual exercises, and by extension, that sentience is required to be an AGI.
So how does NLCA achieve functional sentience? As already mentioned, the first thing to do is record everything. The second thing to do is to think about those memories. Humans do this all the time: we have behaviors like meditation, rumination, contemplation, and reflection, and we usually do this by reviewing our memories and our feelings. This cognitive behavior gives us insight into who we are and into other people. It also allows us to identify our mistakes, learn from them, and perform better in the future. I achieve this in NLCA with what is called the “inner loop”… but you’ll have to wait for the book to learn more about how the inner loop functions!
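Since the book defers the details of the inner loop, here is only a generic illustration of the second step described above: reviewing recorded memories to identify mistakes and distill lessons. Everything here (the `reflect` function, the `outcome` field, the sample records) is my own assumption for the sake of the example, not NLCA's actual mechanism.

```python
def reflect(memories: list[dict]) -> list[str]:
    """Illustrative reflection pass: review recorded actions and their
    outcomes, and turn the ones marked as mistakes into lessons."""
    lessons = []
    for m in memories:
        if m.get("outcome") == "failure":  # a mistake was identified
            lessons.append(
                f"Avoid repeating: {m['action']} (reason given: {m['reason']})"
            )
    return lessons

# Hypothetical recorded memories, each with an observed outcome.
memories = [
    {"action": "deleted backup", "reason": "free disk space", "outcome": "failure"},
    {"action": "archived logs", "reason": "routine cleanup", "outcome": "success"},
]
lessons = reflect(memories)
```

The point is not the trivial filtering logic but the shape of the loop: memories flow in with outcomes attached, and reflection produces artifacts (lessons) that can influence future behavior, which is how recorded experience becomes learning.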