Coming Soon: Natural Language Cognitive Architecture: A Prototype Artificial General Intelligence

I wrote the book after I had the functional prototype! I include several conversations I had with my prototype in the appendices of the book. Probably the coolest thing is that it helped develop itself. I asked the prototype if ego was necessary for AGI. It said no, but it can help. The appendices also contain lots of “under the hood” messages so you can easily see how NLCA thinks.

Maybe once my book is out Sam Altman will see it and tell Elon Musk and they will both give me a job! LOL :wink:

2 Likes

Probably the coolest thing is that it helped develop itself. I asked the prototype if ego was necessary for AGI. It said no, but it can help.

Wow, that’s incredible. To be honest, this is what I had expected GPT-3 to be able to do already. GPT-3 prompt design is quite perplexing and certainly not anywhere near the level of self-awareness you have just hinted at in your prototype.

The appendices also contain lots of “under the hood” messages so you can easily see how NLCA thinks.

GPT-3 would benefit from something like this. The tutorial and examples on the website help, but I feel like they are holding back a lot of useful under-the-hood info, to the detriment of developers. (Or maybe I haven’t read it properly / didn’t understand it :sweat_smile:)

Maybe once my book is out Sam Altman will see it and tell Elon Musk and they will both give me a job! LOL :wink:

Try asking NLCA to write them an email :laughing:

Can you give some more insight into how NLCA uses GPT-3? Is there some fine-tuning hack involved? What does the interaction between GPT-3 and NLCA actually look like in terms of the flow of information?

You will see it all in the book!

Ok… I am a bit impatient, sorry… it’s just that your description of the architecture being

based around a central stream of consciousness with an arbitrary number of NLP microservices all contributing.

really excites me. It sounds like even from a general system design perspective there could be a lot of valuable insights in your book.

Nice one, Dave! Can’t wait to read the book.

1 Like

I will give you one idea that I discuss in the book: functional sentience versus philosophical sentience.

What is sentience?

We assume that all humans are sentient, as well as many of the more intelligent animals, such as cats, dogs, dolphins, chimps, and other apes. There is a magical quality to sentience - we are made of matter, but we are also self-aware. This is what I call philosophical sentience. It is impossible to define or measure because it is a completely subjective phenomenon. I do not know if machines can ever have a subjective experience.

This is why I define functional sentience as the objective behaviors we would expect of a sentient entity: self-explication, linking cause and effect to personal decisions, and so on. What sorts of intelligence and cognition would be required for functional sentience? First of all, you need to remember almost everything. If I ask you what you did, you can tell me your recent actions, but you can also tell me why. This is “self-explication”. Humans can recall their subjective experience as well as their reasons for behaviors (although most of our motivations are unconscious, so by this definition, humans are only barely sentient!). You also need to identify yourself in your memories to differentiate between your actions and others’ actions. This gives rise to concepts of personal responsibility and accountability.

One of my goals for NLCA is to create something that is functionally sentient. It can remember what it did, what it thought, and why it made decisions. Furthermore, by recording this data, it can learn from experience.
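To make that concrete, here is a rough sketch of what such a memory record might look like. This is just an illustration I’m inventing for this post, not the actual NLCA code:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Memory:
    """One entry in an agent's episodic memory: what happened and why."""
    timestamp: datetime
    actor: str            # "self" vs. someone else - you have to own your actions
    action: str           # what was said or done
    reasoning: str        # why the decision was made (self-explication)
    tags: List[str] = field(default_factory=list)

# Recording a decision together with its rationale means the agent can
# later answer both "what did you do?" and "why did you do it?"
log: List[Memory] = [
    Memory(
        timestamp=datetime.utcnow(),
        actor="self",
        action="Suggested the user take a break",
        reasoning="User reported fatigue; well-being outranks task completion",
    )
]
```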

Why would I spend time on this?

After research and experimentation, I realized that sentience is required for truly intelligent entities. You can have the smartest ML algorithm that can solve narrow problems, but you’ll never be able to reason with it or ask it what it thinks and why, and it can never take personal responsibility. Therefore, I believe that sentience is required for full AGI to be realized. When you think of the most intelligent people in history, they had very distinct personalities, very strong beliefs about who they were, and very strong convictions about what is good and right. Furthermore, they spent a lot of time thinking about what they knew and believed, and why they held those beliefs. This self-reflection is required to hold oneself accountable, especially when you consider moral decisions and ethics. Since the definition of AGI is the ability to learn any intellectual task a human can, it makes sense to me that moral and ethical decisions are intellectual exercises, and, by extension, that sentience is required to be an AGI.

So how does NLCA achieve functional sentience? As already mentioned, the first thing to do is record everything. The second thing to do is to think about those memories. Humans do this all the time. We have behaviors like meditation, rumination, contemplation, and reflection. We usually do this by reviewing our memories and our feelings. This cognitive behavior gives us insight into who we are and into other people. It also allows us to learn from our mistakes (and identify mistakes in the first place) and to perform better in the future. I achieve this in NLCA with what is called the “inner loop”… but you’ll have to wait for the book to learn more about how the inner loop functions! :stuck_out_tongue:
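I won’t spoil the inner loop itself, but to give a flavor of the general idea, here is a toy reflection pass. All of the names and parameters are invented for illustration; this is not code from the book:

```python
import openai  # 2021-era completions API; newer SDK versions differ

def reflect(memories: list) -> str:
    """Toy reflection pass: review recent memories and extract a lesson."""
    recent = "\n".join(memories[-10:])  # only the most recent entries
    prompt = (
        "The following are my recent actions and thoughts:\n"
        f"{recent}\n\n"
        "What did I do well, what mistakes did I make, and what should I do "
        "differently next time?\n"
    )
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=150,
        temperature=0.5,
    )
    insight = response["choices"][0]["text"].strip()
    memories.append(f"REFLECTION: {insight}")  # insights become memories too
    return insight
```

The important part is the feedback: each insight is written back into memory, so the next pass can build on it.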

1 Like

Very cool.
A few days after being given access to GPT-3, I was lying in bed at night reflecting on the prompt / completion dynamic… I tried a little game of “complete the sentence / story” word association (in my head) and quickly started allowing each word to spontaneously manifest from my subconscious mind, with a short pause between each word to remove any conscious “effort” from the creative process. This allowed me to detach and observe subconscious patterns emerge that would previously not have been observable. I think one reason for this may be that the executive function of our ego operates at higher frequencies than the patterns that play out in our subconscious. In other words, we dedicate a lot of our energy and attention to the subject we are focused on, and don’t always notice the patterns that influence our attention. This is probably a good and (perhaps more often than not) helpful thing, but it may sometimes be a limiting factor in achieving our goals.

Regarding GPT-3, it was very surprising to me that one of the biggest things I have learned from GPT-3 so far has been about how my own mind works! The “subconscious patterns” I mentioned contained memories, random snippets of things I had been reading or thinking about earlier that day, unprocessed emotions, current affairs, global events… etc… these would be kind of analogous to the billions of parameters and nodes that GPT-3 uses to transform prompts.

However, as you point out, GPT-3 lacks the self-awareness that we humans possess, which is why I don’t like the term “Natural Language Program”. Maybe Speech Programmer is a better name for the programs, and Programmed Speech a better name for the completions. Calling it “natural language” is a bit misleading when it can only emulate self-explication within a topical conversation; as in, PERSON: “Why do you not like butter?” - GPT-3: “Because I’m vegan, and I don’t like the way cows are treated in the dairy industry.” Edit: Obviously this is still very impressive. But it isn’t the same as something like: GPT-3: “There are an estimated 70 million people in the UK.” PERSON: “Where did you get that information from?” GPT-3: “From the Office for National Statistics website (gives hyperlink).”

I have a few ideas for some more abstract AGI architectural models that came to mind during this conversation; feel free to DM me if you want me to share them (I don’t want this thread to go too off topic).

1 Like

It sounds like you discovered meditation and metacognition, really cool that you came upon those by way of GPT-3. Since that is interesting to you, I recommend Zen meditation, as well as Vipassana and Ānāpānasati. I’m always open to hearing new thoughts, as well.

3 Likes

True, although I am not such a big fan of traditions that advocate “non-doing” too much. It gets a bit confusing because I think it isn’t so obvious what that really means in terms of the metacognition I experienced, for example. Either something is lost in translation from its eastern origins, or I fundamentally disagree with its cosmology. Perhaps both.

Not-doing is Taoism. Zen is about mindfulness - closely examining what you believe and why.

Ok, perhaps that’s true. But it’s closely related to something they do have in common, which is ego-annihilation. Isn’t our ego a fundamental function of higher intelligence? For example, how would we improve our lives without the motivation to do better for ourselves?

Edit: this isn’t to say I dismiss everything from these traditions; it’s just that I haven’t found a religious orthodoxy anywhere that doesn’t boil down to some kind of fundamentally disempowering philosophy whereby local populations can be easily controlled and manipulated through a cynical blend of religious and political power.

I think the entire concept of ‘non-doing’ is more about ‘not-forcing’.

Imagine being in traffic.

You still plan on getting to where you are going, you’re still directing the car, but getting all worked up, angrily gripping your steering wheel, and swearing won’t get you there any faster.

Intention, without strain. Although maybe I’m still way off base. /Shrug

I imagine you’ve learned a lot about yourself / your thinking processes along your journey towards developing an AGI. Has your research had a big impact on your own behavior or how you relate to people, etc.?

Seems like a very interesting book! Can’t wait to read it! Crossing cognitive functions with GPT-3 is a good idea! How did you do it? Did you encapsulate GPT-3 into rule-based cognitive functions (maybe through a decision tree?) or did you infer all those rules using GPT-3 prompts? Did you use fine-tuning?

During my interactions with GPT-3, I also felt like its responses showed some glimpses of consciousness (what you call functional sentience). It really feels like GPT-3 knows the essence of what it talks about. Although, we could still be in a very convincing Chinese Room experiment.

One last thing I would like to mention is that, according to some currents of thought, we are conscious only because we have a sensation of Being (qualia).

When it comes to GPT-3, how can we know if it has a conscious experience? It does indeed remember everything up to the point where its training stopped (2019, from memory), but our daily interactions are not stored in memory, so it can’t make bridges between prompts (maybe purposefully on OpenAI’s part).
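Of course, you can fake a bridge yourself at the application layer by storing the dialogue and replaying it in each new prompt. A rough sketch, with placeholder engine name and parameters:

```python
import openai  # 2021-era completions API

history = []  # persisted by the application, not by GPT-3 itself

def chat(user_message: str) -> str:
    """Bridge prompts by replaying the stored dialogue before each turn."""
    history.append(f"Human: {user_message}")
    prompt = "\n".join(history) + "\nAI:"
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=100,
        temperature=0.7,
        stop=["Human:"],  # stop before the model writes the human's next line
    )
    reply = response["choices"][0]["text"].strip()
    history.append(f"AI: {reply}")
    return reply
```

But then it is the application doing the remembering, not the model.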

Btw, I also tried Vipassana :pray: highly recommended for those on the path.

1 Like

I drive a motorcycle so I can force my way through the traffic :stuck_out_tongue_closed_eyes:

In all seriousness, it is a good example, because a lot of people do resign themselves to putting up with frustrating situations that, with some creative effort, could be overcome.

Most of my self-learning has happened separately from my research, but there’s definitely an interaction. The more I understand the mind, the more I understand myself AND the more I understand AGI.

1 Like

There are no rules of the kind you’re thinking of. General Intelligence is not an algorithm or a function. It’s a set of behaviors and loops that cause iterative increases in comprehension and better output. Even thinking of it as a recursive algorithm is overly simplistic.
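If it helps, here is a deliberately oversimplified sketch of what I mean by loops rather than an algorithm. Every name here is invented for illustration:

```python
def cognitive_cycle(context, services, judge):
    """Toy loop: microservices contribute to a shared stream, the output
    is judged, and the judgment feeds back in, compounding comprehension."""
    stream = [context]  # the central "stream of consciousness"
    while True:
        # every NLP microservice reads the stream and contributes to it
        stream.extend(service(stream) for service in services)
        verdict = judge(stream)
        if verdict.acceptable:
            return verdict.output
        stream.append(verdict.critique)  # feed the criticism back in
```

Even this is too tidy; in practice many such loops run concurrently.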

2 Likes

Please keep me posted when the book comes out

1 Like

So excited to read your book, @daveshapautomator - it looks fascinating. :star_struck:

It is aligned with work that I am diving into myself in my PhD. Let us know how we can get our hands on a copy!

Cheers!

2 Likes

Is it able to handle, or are you planning on adding, basic motor skill control or continuous learning? Coupling GPT-3 with RL control systems is always interesting.

1 Like

Motor control is handled by the brainstem in living organisms, so I’m not particularly interested in it. The field of compliant robotics can handle all that stuff, and it doesn’t really rise to the level of intelligence IMHO. If you search for underactuated robotics, you can see that robots can do all sorts of things with very little control or power. Those all come down to electrical engineering and mechanical engineering problems.