I’ve been hard at work on completing version 1 of Raven, a fully realized ACE (artificial cognitive entity). This is a newer, more sophisticated cognitive architecture than the one I proposed in Natural Language Cognitive Architecture. NLCA was based on two loops with a shared database (or nexus), but the current architecture is much more complex. It is so complex, in fact, that it has to be implemented as a series of microservices. I first came up with the idea of using microservices to achieve full artificial cognition a few years ago, but (1) only GPT-2 was available at the time and (2) I had a lot to learn about neuroscience and how to use LLMs. Well, both problems have now been solved! So I am revisiting my original architecture, MARAGI (microservices architecture for robotics and artificial general intelligence).
Below is a grossly oversimplified network diagram of MARAGI. At the heart is the aptly named Nexus, which is also the first microservice I built. The Nexus serves as the heart of the ACE: its primary responsibility is to hold all the memories and knowledge of the entity. This includes episodic and declarative memory, which means it has several search/recall/fetch functions built in. There are two primary modalities by which human recall works: associative and temporal. In other words, our memory works either by reminders (this thing reminds me of another thing) or by time (relative sequence of events, relative age of a memory). There are a few other implicit modalities of recall, but these two cover the vast majority of memory functions.
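To make the two modalities concrete, here is a minimal sketch of associative and temporal recall. This is an illustration, not the Nexus's actual implementation: the `Memory` class and function names are invented for the example, and the toy two-dimensional embeddings stand in for real high-dimensional vectors.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two (non-zero) vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class Memory:
    def __init__(self, text, embedding, timestamp):
        self.text = text
        self.embedding = embedding
        self.timestamp = timestamp

def associative_recall(memories, query_embedding, top_k=3):
    # "This thing reminds me of another thing": rank by semantic similarity.
    ranked = sorted(memories,
                    key=lambda m: cosine_similarity(m.embedding, query_embedding),
                    reverse=True)
    return ranked[:top_k]

def temporal_recall(memories, since=None, top_k=3):
    # "Relative sequence of events": rank by recency, optionally after a cutoff.
    pool = [m for m in memories if since is None or m.timestamp >= since]
    return sorted(pool, key=lambda m: m.timestamp, reverse=True)[:top_k]
```

Associative recall here is just nearest-neighbor search over embeddings; temporal recall is a sort on timestamps. The real Nexus can expose both behind its REST endpoints.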
I’ve got a working instance of the Nexus here: GitHub - daveshap/Nexus: Stream of consciousness nexus REST microservice
It is a RESTful microservice that is braindead simple to use!
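To give a flavor of what "braindead simple" might look like from the client side, here is a hypothetical sketch. The port, endpoint names, and payload fields are all assumptions made up for illustration; the repo's README documents the real interface.

```python
import json
from urllib import request

NEXUS_URL = "http://localhost:8888"  # assumed port, not necessarily the real default

def build_save_payload(text):
    # Serialize a new memory for a (hypothetical) save endpoint.
    return json.dumps({"input": text})

def build_search_payload(query, max_results=5):
    # Serialize an associative-search query for a (hypothetical) search endpoint.
    return json.dumps({"query": query, "max_results": max_results})

def post(endpoint, payload):
    # Fire a JSON request against the running Nexus service.
    req = request.Request(f"{NEXUS_URL}/{endpoint}",
                          data=payload.encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The point is that the whole client fits in a few lines: serialize a dict, POST it, parse the reply.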
The next thing I did was work on a text-based simulation, so that there will be a virtual world in which to test and improve Raven without risk. No robotic body, no access to the internet. Just a lonely ACE wandering in its own dreamland. I just figured out the text-based simulation this morning, so I will encapsulate it in a microservice soon. Basically, this simulation microservice stands in for what will eventually be a sensor microservice that handles all input. The sensor simulation service is here: GitHub - daveshap/SensorSimSvc: World-state and sensor input simulation for ACOG/Raven projects (keep in mind it's not yet fully functional, but the test script demonstrates how it works)
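The swappable-input idea could be sketched like so. Everything here (the class name, the `read`/`perception_loop` interface) is hypothetical; the point is only that the simulation and the eventual sensor microservice can sit behind the same interface, so the rest of the ACE never needs to know which one it is talking to.

```python
class SensorSim:
    """Stand-in for the future sensor microservice: emits world-state text."""
    def __init__(self, events):
        self.events = list(events)

    def read(self):
        # Return the next observation, or None when the dreamland is quiet.
        return self.events.pop(0) if self.events else None

def perception_loop(sensor, nexus_log):
    # The ACE consumes observations the same way regardless of their source.
    while (observation := sensor.read()) is not None:
        nexus_log.append(observation)
    return nexus_log
```

Swapping the simulation out for real sensors later would then just mean handing `perception_loop` a different object with the same `read` method.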
Finally, I also resurrected my embeddings microservice. I’ve been a fan of Google’s USE (Universal Sentence Encoder) for a long time; it is sort of a progenitor technology to GPT. This is probably the simplest microservice you’ll ever see. I would even call it a nanoservice. You send it a list of sentences and it sends back 512-dimension embeddings. I upgraded this service to USEv5 Large and it is now hella fast (0.02 seconds for 3 embeddings). So it is reaching parity with the speed of human thought, although the resolution is much lower. I suspect that human thoughts/memories/embeddings require millions (or billions) of dimensions to fully represent. I could be wrong, though. The semantic embedding microservice is available here: GitHub - daveshap/SemanticEmbedding_Microservice: REST API microservice for handling Universal Sentence Encoder
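A hypothetical client for the nanoservice might look like this. The URL and JSON shapes are assumptions for illustration; the repo's code shows the actual call. The contract is the important part: a list of sentences in, one 512-dimension vector per sentence out.

```python
import json
from urllib import request

EMBED_URL = "http://localhost:9999"  # assumed port, not necessarily the real one

def build_body(sentences):
    # The service takes a plain list of sentences.
    return json.dumps({"sentences": sentences}).encode("utf-8")

def parse_embeddings(raw, expected_dims=512):
    # The service returns one 512-dimension vector per input sentence.
    vectors = json.loads(raw)
    assert all(len(v) == expected_dims for v in vectors), "unexpected vector size"
    return vectors

def embed(sentences):
    req = request.Request(EMBED_URL, data=build_body(sentences),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return parse_embeddings(resp.read())
```

Those 512-dimension vectors are exactly what the Nexus's associative recall can search over.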
Now here are the videos I made documenting this work:
And a bonus podcast livestream I did last night: