Coming Soon: Natural Language Cognitive Architecture: A Prototype Artificial General Intelligence

I drive a motorcycle so I can force my way through the traffic :stuck_out_tongue_closed_eyes:

In all seriousness, it is a good example, because a lot of people do resign themselves to putting up with frustrating situations that, with some creative effort, can be overcome.

Most of my self-learning has happened separately from my research, but there’s definitely an interaction. The more I understand the mind, the more I understand myself AND the more I understand AGI.

1 Like

There are no rules of the kind you’re thinking of. General Intelligence is not an algorithm or a function. It’s a set of behaviors and loops that cause iterative increases in comprehension and better output. Even thinking of it as a recursive algorithm is overly simplistic.

2 Likes

Please keep me posted when the book comes out

1 Like

So excited to read your book @daveshapautomator it looks fascinating. :star_struck:

It is aligned with work that I am diving into myself in my PhD. Let us know how we can get our hands on a copy!

Cheers!

2 Likes

Is it able to handle basic motor-skill control or continuous learning, or are you planning to add them? Coupling GPT-3 with RL control systems is always interesting.

1 Like

Motor control is handled by the brainstem in living organisms, so I’m not particularly interested in it. The field of compliant robotics can handle all that and doesn’t really rise to the level of intelligence, IMHO. If you search for underactuated robotics, you can see that robots can do all sorts of things with very few controls and very little power. Those all come down to electrical and mechanical engineering problems.

Having meditation and mindfulness experience helps a lot in this field of AI. Even a couple of shr00m trips create a wider perspective on cognitive processes.
Also, it is kinda funny how some people work through a heap of advanced tech concepts, only to arrive at “classic” spirituality in the end.

Yes, the most critical thing is the self awareness about the assumptions and beliefs we carry with us.

1 Like

UPDATE: UPS lost my proof in shipping so it got delayed until at least Monday 8/2 :sob:

I ordered a backup copy just in case, but if all goes well then I can make sure the proof looks good on Monday and then it will be available for purchase!

2 Likes

Excellent, this will be an interesting read.

1 Like

Congratulations! From your vantage point, how long will the principles and models in the book remain applicable? Given the pace of change and progress, it must be hard to write content that stays relevant for long. Or is it?

2 Likes

I’ve recently seen Facebook post a few things that resemble cognitive architectures but they don’t rise to the sophistication of my work. I’ve known for a while that it’s only a matter of time now before many people arrive at the same conclusions I have - I’m just a little ahead of the curve :wink:

I think there will always be room for a language-only AGI but I also think that multimodal integration will be big. In this, I predict we will soon see multiple types of AGI - some that “think” only in language (like mine) and others that “think” with sounds/images/3D/etc. Each type will probably have different strengths and weaknesses.

I hope my book, in the long run, is viewed more like Darwin’s On the Origin of Species - that work over the coming decades will simply reinforce and add understanding to what I’ve produced. I think that ultimately my book will be seen as a simple starting point.

4 Likes

Really looking forward to reading your book. I’m also working on a chatbot with advanced capabilities, such as reasoning and short/long-term memory. In my experiments with GPT-3 I found that it was quite capable of deductive, inductive, and abductive reasoning. It is also capable of asking the right questions to extract external knowledge. But what GPT-3 lacks is memory. It can only remember what it was trained on; it has no idea about things that happened between when it was trained and today. Facebook tried to alleviate this problem in BlenderBot 2.0 by incorporating internet search. But when playing with it, I found that it depends heavily on the quality of the search engine and on what information it can provide.
I think that memory should not just be a database of all the utterances, because utterances are noisy and scattered in time, which makes it hard to extract relations and knowledge from them. I believe that Knowledge Graphs are the way to go when we want to model long/short-term memory, but they are harder to implement. You have to extract concepts and relations from utterances and save them in a KB (an intermediate KB for short-term memory), then reconcile it into the bigger KB (long-term memory). But you also have to disambiguate any knowledge that doesn’t stand up to scrutiny. And from here it becomes interesting, as I’m not sure yet how to do that. :slight_smile: I mean, how to decide what should count as “valid” knowledge to use.
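A minimal sketch of the two-tier memory idea described above: triples are extracted from each utterance into a short-term KB, then reconciled into a long-term KB, promoting only triples observed often enough to count as “valid.” The string-splitting extractor and the `promote_after` threshold are stand-ins I made up for illustration; a real system would use an information-extraction model and a much smarter reconciliation rule.

```python
from collections import Counter

def extract_triples(utterance: str) -> list:
    # Toy extractor: expects "subject | relation | object" fragments
    # separated by semicolons. A real system would use an IE model.
    triples = []
    for fragment in utterance.split(";"):
        parts = [p.strip() for p in fragment.split("|")]
        if len(parts) == 3:
            triples.append(tuple(parts))
    return triples

class Memory:
    def __init__(self, promote_after: int = 2):
        self.short_term = Counter()  # noisy per-utterance triples
        self.long_term = set()       # reconciled knowledge
        self.promote_after = promote_after

    def observe(self, utterance: str) -> None:
        for triple in extract_triples(utterance):
            self.short_term[triple] += 1

    def reconcile(self) -> None:
        # Promote triples seen at least `promote_after` times -- one
        # crude proxy for deciding what counts as "valid" knowledge.
        for triple, count in self.short_term.items():
            if count >= self.promote_after:
                self.long_term.add(triple)

mem = Memory()
mem.observe("Paris | capital_of | France")
mem.observe("Paris | capital_of | France; Mars | color | red")
mem.reconcile()
```

The repeated triple is promoted to long-term memory, while the one-off observation stays in the short-term buffer.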

4 Likes

I really agree with you @pkulko about the need for working memory.

I am intrigued by how you would see Knowledge Graphs being integrated into LLMs. Could you tell us a little more about your ideas on that?

I’ve had extensive conversations with some people about KGs, and my personal conclusion is that a KG is just a different format of the internet, aka the World Wide Web. A KG is just a web of documents… which is exactly what the internet is. Also, as you get into the realm of millions or billions of memories for an AGI, a KG becomes untenable. Instead, I recommend sticking with search engines such as SOLR, ElasticSearch, or OpenSemanticSearch. Even PostgreSQL has full-text search capabilities today, and I think it will only be a matter of time before all of these integrate vector-based semantic search.
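To make the vector-based semantic search idea concrete, here is a rough sketch: memories are embedded as vectors and the closest one to a query is retrieved by cosine similarity. The `embed()` function is a toy bag-of-words embedding over a made-up vocabulary; a real system would use a learned sentence encoder behind an engine like SOLR or ElasticSearch.

```python
import math
from collections import Counter

# Toy vocabulary for the bag-of-words embedding (illustrative only).
VOCAB = ["motorcycle", "traffic", "book", "memory", "robot"]

def embed(text: str) -> list:
    # Count vocabulary words in the text to form a crude vector.
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in VOCAB]

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, memories: list) -> str:
    # Return the memory whose embedding is closest to the query's.
    qv = embed(query)
    return max(memories, key=lambda m: cosine(qv, embed(m)))

memories = [
    "I ride my motorcycle through heavy traffic",
    "the proof of the book was lost in shipping",
]
best = search("what happened to the book", memories)
```

The point is that retrieval works on vector similarity rather than graph traversal, which scales with an index instead of with the number of edges.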

I will never tell anyone they are wrong because I have been wrong many times! However, right now I do not see KGs as the way of the future for knowledge management in AGI. I do agree that asking the right questions and the quality of the search engine are critical for extracting knowledge from a database. Keep in mind we also have finetuning now. I suspect that we will see specific QA (question answering) versions of GPT-3 before long that are merely trained on large corpora of facts, data, and knowledge.

I am presently working on a “Question Asking” finetuned model which could then be paired with the “Question Answering” finetuned models. I suspect this will be a far more powerful direction. This is just what I think based on what I know.
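A hypothetical sketch of pairing a “Question Asking” model with a “Question Answering” model, as described above. The two model calls are passed in as plain functions so the loop itself can be shown without an API key; the function names and the loop structure are my assumptions, not anything from the original post.

```python
def knowledge_loop(topic, ask_model, answer_model, rounds=3):
    """Alternate between generating a question about `topic` and
    answering it, accumulating (question, answer) pairs. Each answer
    seeds the next question, so the dialogue can dig deeper."""
    learned = []
    context = topic
    for _ in range(rounds):
        question = ask_model(context)
        answer = answer_model(question)
        learned.append((question, answer))
        context = answer  # feed the answer back as the next context
    return learned

# Deterministic toy stand-ins for the two finetuned models:
pairs = knowledge_loop(
    "memory",
    ask_model=lambda ctx: f"What is {ctx}?",
    answer_model=lambda q: q.replace("What is ", "").rstrip("?") + " is important",
    rounds=2,
)
```

In a real setup, each lambda would be replaced by a call to a finetuned completion model.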

1 Like

Just so I’m clear: the scenario is your prompt, as is “Moral Questions:”, and then all the bulleted text is GPT-3?

I really like this idea of playing around with question generation.

You can condition GPT-3 prompts on any external data so that it can incorporate that data into its responses. I just have an ensemble of different systems (KGs, semantic search, a generator). They are not deeply integrated with one another but exchange information.
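A minimal sketch of what “conditioning GPT-3 prompts on external data” can look like in practice: retrieved facts are simply prepended to the prompt so the generator can draw on them. The retrieval source here is a plain list; in the ensemble described above it could be the KG or the semantic search component. The template wording is my own, not from the post.

```python
def build_prompt(question: str, facts: list) -> str:
    # Stitch retrieved facts into the prompt ahead of the question,
    # so the completion model can ground its answer in them.
    context = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Use the following facts to answer the question.\n"
        f"Facts:\n{context}\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_prompt(
    "When was the library founded?",
    ["The library was founded in 1901.", "It holds 2 million books."],
)
```

The resulting string would then be sent to the completion endpoint; the model never needs the facts in its weights, only in its context window.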

1 Like

Moral questions are just one kind. You can also ask GPT-3 to generate legal, medical, scientific, emotional, and personal questions.
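A toy illustration of steering question generation by category, per the post above: the category name is slotted into a prompt template that would then be sent to GPT-3. The template text is an assumption for illustration.

```python
CATEGORIES = ["moral", "legal", "medical", "scientific", "emotional", "personal"]

def question_prompt(category: str, topic: str) -> str:
    # The trailing "-" primes the model to continue a bulleted list.
    return f"Generate five {category} questions about {topic}:\n-"

prompts = [question_prompt(c, "organ donation") for c in CATEGORIES]
```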

2 Likes

I think this distinction between philosophical and functional sentience is extremely important, both in pursuit of and in determination of success. I have a question about the thoughts you have kindly shared.

Humans probably don’t recall most of their subjective experiences, at least not consciously. Hyperthymesia is both rare and often negatively impacts life. Unfortunately, it would seem to me that human memories are highly fallible. Sometimes it is important for human health to forget, e.g. pain, PTSD, information overload, etc. Yet I think most would agree we are sentient, and most people have some sense of responsibility or accountability (whatever that means to them). If you create something that remembers everything equally, is that really the critical step toward functional sentience? Would not context, subjectivity, and standpoint be just as critical (if not more so) in exhibiting sentient behaviour?

I do agree that memory is a key part of the puzzle. I think, though, that these parameters of relativity (i.e. context/standpoint) are also important. With the “inner loop”, are you referring to n-loop learning (after Argyris and Schön)? I think that type of cybernetic approach would be highly useful; it would help connect memory with relativity concepts.

Anyways, super looking forward to your book @daveshapautomator :smiley:

2 Likes