Coming Soon: Natural Language Cognitive Architecture: A Prototype Artificial General Intelligence

Having meditation and mindfulness experience helps a lot in this field of AI. Even a couple of shroom trips can create a wider perspective on cognitive processes.
Also, it is kind of funny how some people go through a heap of advanced tech concepts, only to arrive at “classic” spirituality in the end.

Yes, the most critical thing is the self awareness about the assumptions and beliefs we carry with us.

1 Like

UPDATE: UPS lost my proof in shipping, so it got delayed until at least Monday 8/2 :sob:

I ordered a backup copy just in case, but if all goes well then I can make sure the proof looks good on Monday and then it will be available for purchase!

2 Likes

Excellent, this will be an interesting read.

1 Like

Congratulations! From your vantage point, how long will the principles and models in the book stay applicable? Given the pace of change and progress, it must be hard to write long-lived content. Or is it?

2 Likes

I’ve recently seen Facebook post a few things that resemble cognitive architectures but they don’t rise to the sophistication of my work. I’ve known for a while that it’s only a matter of time now before many people arrive at the same conclusions I have - I’m just a little ahead of the curve :wink:

I think there will always be room for a language-only AGI but I also think that multimodal integration will be big. In this, I predict we will soon see multiple types of AGI - some that “think” only in language (like mine) and others that “think” with sounds/images/3D/etc. Each type will probably have different strengths and weaknesses.

I hope my book, in the long run, is viewed more like Darwin’s On the Origin of Species - that work over the coming decades will simply reinforce and add understanding to what I’ve produced. I think that ultimately my book will be seen as a simple starting point.

4 Likes

Really looking forward to reading your book. I’m also working on a chatbot with advanced capabilities, such as reasoning and short/long-term memory. In my experiments with GPT-3 I found that it was quite capable of deductive, inductive, and abductive reasoning. It is also capable of asking the right questions to extract external knowledge. But what GPT-3 lacks is memory. It can only remember what it was trained on; it has no idea about anything that happened between its training date and today. Facebook tried to alleviate this problem in BlenderBot 2.0 by incorporating internet search. But when playing with it, I found that the result greatly depends on the quality of the search engine and what information it can provide.
I think that memory should not just be a database of all the utterances, because utterances are noisy and spread out over time, which makes it hard to extract relations and knowledge from them. I believe that Knowledge Graphs are the way to go when we want to model long/short-term memory, but they are harder to implement. You have to extract concepts and relations from utterances and save them in a KB (an intermediate KB for short-term memory), then reconcile it into the bigger KB (long-term memory). You also have to disambiguate any knowledge that doesn’t stand up to scrutiny. And from here it becomes interesting, as I’m not sure yet how to do that. :slight_smile: I mean, how to decide what should count as “valid” knowledge.
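The two-stage pipeline described above (noisy utterances → intermediate short-term KB → reconciled long-term KB) can be sketched in a few lines of Python. Everything here is a toy: `extract_triples` is a placeholder for a real relation extractor, and “reconciliation” is simply last-write-wins.

```python
# Minimal sketch of a two-stage memory pipeline: utterances -> short-term KB
# of (subject, relation, object) triples -> reconciled long-term KB.
# extract_triples() is a toy placeholder, not a real extractor.

def extract_triples(utterance):
    """Toy extractor: split 'X is Y' / 'X likes Y' style utterances."""
    for relation in ("is", "likes"):
        marker = f" {relation} "
        if marker in utterance:
            subject, obj = utterance.split(marker, 1)
            return [(subject.strip(), relation, obj.strip().rstrip("."))]
    return []

short_term_kb = []   # noisy, per-conversation triples
long_term_kb = {}    # reconciled: (subject, relation) -> object

def observe(utterance):
    short_term_kb.extend(extract_triples(utterance))

def reconcile():
    """Promote short-term triples to long-term memory; later facts simply
    overwrite earlier ones, a naive stand-in for real conflict resolution."""
    while short_term_kb:
        subject, relation, obj = short_term_kb.pop(0)
        long_term_kb[(subject, relation)] = obj

observe("Paris is the capital of France.")
observe("Alice likes chess.")
reconcile()
```

A real system would replace both placeholders with an NLP extraction step and a proper policy for deciding which conflicting facts survive reconciliation, which is exactly the open question raised above.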

4 Likes

I really agree with you @pkulko about the need for working memory.

I am intrigued by how you would see Knowledge Graphs be implemented into LLMs? Could you tell us a little more about your ideas on that?

I’ve had extensive conversations with some people about KGs and my personal conclusion is that a KG is just a different format of the internet, aka the World Wide Web. A KG is just a web of documents… which is exactly what the internet is. Also, as you get into the realm of millions or billions of memories for an AGI, a KG becomes untenable. Instead, I recommend sticking with search engines such as Solr, Elasticsearch, or Open Semantic Search. Even PostgreSQL has full-text search capabilities today, and I think it will only be a matter of time before all of these integrate vector-based semantic search.
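As an illustration of the “memories in a search engine” idea, here is a minimal inverted-index sketch in pure Python. Engines like Solr or Elasticsearch do the same thing at scale, with real relevance ranking (BM25) and, increasingly, vector-based retrieval; the keyword-overlap scoring here is only a toy stand-in.

```python
# Tiny inverted-index sketch: each memory is a document, retrieval is
# keyword overlap. A toy stand-in for Solr/Elasticsearch-style search.
from collections import defaultdict

index = defaultdict(set)   # token -> set of memory ids
memories = []              # id -> text

def remember(text):
    mem_id = len(memories)
    memories.append(text)
    for token in text.lower().split():
        index[token].add(mem_id)
    return mem_id

def recall(query, top_k=3):
    """Rank memories by how many query tokens they share."""
    scores = defaultdict(int)
    for token in query.lower().split():
        for mem_id in index.get(token, ()):
            scores[mem_id] += 1
    ranked = sorted(scores, key=lambda m: -scores[m])
    return [memories[m] for m in ranked[:top_k]]

remember("the user prefers short answers")
remember("the user lives in berlin")
remember("berlin has cold winters")
```

Swapping the overlap score for an embedding-based similarity is precisely the “vector-based semantic search” step predicted above.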

I will never tell anyone they are wrong because I have been wrong many times! However, right now I do not see KGs as the way of the future for knowledge management in AGI. I do agree that asking the right questions and the quality of the search engine are critical for extracting knowledge from a database. Keep in mind we also have finetuning now. I suspect that we will see specific QA (question answering) versions of GPT-3 before long that are trained on large corpora of facts, data, and knowledge.

I am presently working on a “Question Asking” finetuned model which could then be paired with the “Question Answering” finetuned models. I suspect this will be a far more powerful direction. This is just what I think based on what I know.
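A rough sketch of how a “question asking” model could be paired with “question answering” models. Both model calls are stubbed out here; in practice each would be a call to a finetuned model, and the fact table would be a real knowledge store.

```python
# Sketch of pairing a question-asking model with a question-answering model.
# Both functions are stubs standing in for finetuned-model API calls.

def ask_questions(context):
    """Stub for a question-asking model: derive follow-up questions."""
    return [f"What do we know about {word}?" for word in context.split()[:2]]

def answer_question(question, knowledge):
    """Stub for a QA model: look the topic up in a fact table."""
    topic = question.removeprefix("What do we know about ").rstrip("?")
    return knowledge.get(topic, "unknown")

knowledge = {"dolphins": "Dolphins are marine mammals."}
context = "dolphins sleep"
qa_pairs = [(q, answer_question(q, knowledge))
            for q in ask_questions(context)]
```

The interesting design question is the loop itself: the asker generates candidate questions from context, the answerer fills in what it can, and the unanswered questions (“unknown” here) mark gaps in the system’s knowledge.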

1 Like

Just so I’m clear: the scenario is your prompt, as is “Moral Questions:”, and then all the bulleted text is GPT-3?

I really like this idea of playing around with question generation.

You can condition GPT-3 prompts on any external data so that it can incorporate that data into its responses. I just have an ensemble of different systems (KGs, semantic search, a generator). They are not deeply integrated with one another but exchange information.
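That loose coupling might look something like the sketch below: each subsystem answers independently and the results are simply concatenated into a conditioning prompt for the generator. All three subsystems here are stubs standing in for real components.

```python
# Sketch of a loosely coupled ensemble: KG lookup and semantic search each
# contribute facts, which are concatenated into the generator's prompt.

def kg_lookup(query):
    """Stub KG: returns structured triples for matching entities."""
    return ["(Paris, capital_of, France)"] if "paris" in query.lower() else []

def semantic_search(query):
    """Stub search engine: returns relevant text passages."""
    return ["Paris is the largest city in France."] if "paris" in query.lower() else []

def build_prompt(query):
    facts = kg_lookup(query) + semantic_search(query)
    context = "\n".join(facts)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("What is Paris?")
```

The appeal of this pattern is that subsystems stay swappable: a better search engine or a bigger KG changes only what lands in the context block, not the generator.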

1 Like

Moral questions are just one kind. You can also ask GPT-3 to generate legal, medical, scientific, emotional, and personal questions.
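One way to do that is to keep a single prompt template and swap only the category word; a minimal sketch (the template wording is illustrative, not from the book):

```python
# One template, many question categories: swapping a single word re-targets
# the prompt at a different kind of question generation.

CATEGORIES = ["moral", "legal", "medical", "scientific", "emotional", "personal"]

def question_prompt(category, scenario):
    return (f"Scenario: {scenario}\n"
            f"{category.capitalize()} questions:\n- ")

prompts = [question_prompt(c, "A self-driving car must swerve.")
           for c in CATEGORIES]
```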

2 Likes

I think this distinction between philosophical and functional sentience is extremely important, both in pursuit of and in determination of success. I have a question about the thoughts you have kindly shared.

Humans probably don’t recall most of their subjective experiences, at least not consciously. Hyperthymesia is both rare and often negatively impacts life. Unfortunately, it would seem to me that human memories are highly fallible. Sometimes it is important for human health to forget (e.g. pain, PTSD, information overload). Yet I think most would agree we are sentient, and most people have some sense of responsibility or accountability (whatever that means to them). If you create something that remembers everything equally, is that really the critical step toward functional sentience? Would not context, subjectivity, and standpoint be just as critical (if not more so) in exhibiting sentient behaviour?

I do agree that memory is a key part of the puzzle. I think, though, that these parameters of relativity (i.e. context/standpoint) are also important. With the “inner loop”, are you referring to n-loop learning (after Argyris and Schön)? I think that type of cybernetic approach would be highly useful; it would help connect memory with relativity concepts.
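For readers unfamiliar with the Argyris/Schön distinction: in single-loop learning the system adjusts its action against a fixed goal, while in double-loop learning persistent error can trigger revision of the goal itself. A toy numeric sketch (the 0.5 gains and the three-error trigger are arbitrary choices for illustration):

```python
# Toy single- vs double-loop learning. The single loop nudges the action
# toward a fixed goal; the double loop can also revise the goal itself
# when errors stay above tolerance.

def single_loop(action, goal):
    """Adjust the action halfway toward the goal."""
    return action + 0.5 * (goal - action), goal

def double_loop(action, goal, error_history, tolerance=1.0):
    """If the last three errors all exceed tolerance, halve the goal
    (i.e. question the 'governing variable'), then run the single loop."""
    if len(error_history) >= 3 and min(error_history[-3:]) > tolerance:
        goal = goal * 0.5
    return single_loop(action, goal)

action, goal = 0.0, 8.0
history = []
for _ in range(5):
    history.append(abs(goal - action))
    action, goal = double_loop(action, goal, history)
```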

Anyways, super looking forward to your book @daveshapautomator :smiley:

2 Likes

One thing about any disruptive technology, especially anything that takes the mantle of “AGI”: it must be superior to humans in every single way before people will accept it as such. Remember that the working definition of AGI today is a system capable of any intellectual activity that a human being is. This definition is expansive, and thus includes phenomena such as perfect recall (photographic memory). So yes, in essence I do believe that perfect recall is a necessary feature of something that claims to be an AGI, which is also definitionally required to be able to self-explicate as well as the best humans.

Consider the monk who has mastered mindfulness and has the most well-developed metacognitive skills and observations of any human ever. Such self-awareness is remarkable, but also would be a prerequisite ability for an AGI. Also, while humans might not have explicit declarative memories of everything, we use powerful deduplication techniques for our memories so that we can save space. Jeff Hawkins explores this extensively in A Thousand Brains and it is also further explored by David Badre in On Task. So it’s not quite fair to compare a human memory to a database.
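The deduplication point can be illustrated with a toy store-time filter: a new memory is kept only if it is not near-identical to one already stored. Token overlap (Jaccard similarity) stands in here for real semantic similarity, and the 0.8 threshold is arbitrary.

```python
# Toy memory deduplication: drop incoming memories that are near-duplicates
# of stored ones. Jaccard token overlap is a stand-in for semantic similarity.

stored = []

def jaccard(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def store(memory, threshold=0.8):
    if any(jaccard(memory, m) >= threshold for m in stored):
        return False   # deduplicated away
    stored.append(memory)
    return True

store("the meeting is at noon")
store("the meeting is at noon")   # exact duplicate, dropped
store("lunch is at one")
```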

I wasn’t familiar with Argyris but yes, his double-loop idea is similar to what I propose. I wasn’t able to find out who Schön was.

2 Likes

The main advantage of KGs is in their structured representation of data (concepts and relations), which is an essence of knowledge. This structure allows for reasoning. Another big advantage of the KGs is their verifiability, when you can easily find and check the reasoning pathways. That verifiability is missing from the current deep learning “black boxes”. So in short, I believe in Neuro-Symbolic approach to AGI :slight_smile:
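The verifiability claim above can be made concrete: with facts stored as explicit triples, the chain of relations linking two entities can be recovered and inspected, unlike a neural model’s opaque activations. A minimal breadth-first sketch over a toy triple store:

```python
# Verifiable reasoning over a toy KG: breadth-first search returns the exact
# chain of triples connecting two entities, which a human can audit.
from collections import deque

triples = [
    ("Socrates", "is_a", "man"),
    ("man", "is_a", "mortal"),
    ("Plato", "student_of", "Socrates"),
]

def reasoning_path(start, goal):
    """Return the list of triples linking start to goal, or None."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for s, r, o in triples:
            if s == node and o not in seen:
                seen.add(o)
                queue.append((o, path + [(s, r, o)]))
    return None
```

The returned path is itself the explanation, e.g. the chain from "Plato" to "mortal" passes through "student_of" and two "is_a" edges, and each hop can be checked against the stored facts.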

2 Likes

OK, yes, I accept that people want ‘better than human’ in their AGI. You probably say something to that effect in your book; I am just eager to engage with your ideas, so I am going by what is in this thread. I still think, though, that forgetting, or at least prioritising, memories is important if we want to see some kind of functional sentience that exhibits aligned and moral behaviour we would equate with human value-sets. But I will look up those books you recommended, thank you for that.

Schön and Argyris worked together in the early days of n-loop learning. Argyris was the psychologist and Schön the philosopher. They were inspired by people like Gregory Bateson, Norbert Wiener, Margaret Mead, John von Neumann, and Warren McCulloch. The ideas of loop learning branched out into several different fields (though I would skip the ’90s business-fad stuff).

2 Likes

I can see how that would work for some cases, but as Dave said it would be tough to map all the connections in something as big as GPT-3. This would become especially true when you are looking to understand the values/ethics that GPT-3 is drawing from.

1 Like

I see this scalability as a rather “technical” problem. And even the existing Google Knowledge Graph “answered roughly one-third of the 100 billion monthly searches they handled. By May 2020, this had grown to 500 billion facts on 5 billion entities” (Google Knowledge Graph, Wikipedia).

2 Likes

Speak of the devil, check this out: Building a Semantic Search Engine for Large-Scale Fact-Checking and Question Answering | by Chris Samarinas | Towards Data Science

I may have just found the optimal QA system for NLCA!

1 Like

Great news! I’ve received the final proof and all looks good. Now to work out the last details and actually go live on Barnes and Noble… Stand by everyone, it’s coming very soon!

2 Likes