I am pleased to announce that my book is finally on sale! AMA
Great achievement! Although I couldn’t order either option to Europe from Barnes & Noble. It would be a superb feat to be able to buy the book in Europe.
Still reading it, couldn’t help but GET it
Right now I’m at the very beginning and it already looks intriguing!
Yes I’ve heard from many people that I need a solution for Europe. Working on that now…
Okay since the EPUB is free and B&N does not have any exclusivity requirements I’ve just posted the files on my site. Here you go!
Just scroll down to the links and get either the EPUB or PDF directly!
ALSO - I am working on public demo code. I realized my research code was a HOT MESS, so I will not be sharing that version exactly. However, the public version of the code will be cleaner and more readable. It will also be under the MIT license so everyone can use it however they like within the bounds of that license. I’ve already done the bulk of the coding, but I have some prompt engineering to do as well as debugging. The NLCA public demo code should be available within a week or two, certainly by the end of August. I’ll also put up some companion YouTube videos to (1) explain the code and (2) demonstrate that it actually does what I say it does.
Now, before anyone complains that this is not a “full AGI” - yes, I know. It’s a prototype. There are still numerous cognitive abilities that I haven’t fully worked out, such as object permanence. I have ideas about that one… More important cognitive abilities, such as gating, should probably come first. What is “gating” you ask? Gating is, broadly speaking, the ability to hold your actions until the right moment, or cancel them entirely. Let’s say you’re at a party and you get an idea for a clever joke, but then another part of your brain warns that your cunning witticism actually won’t land well with the current crowd… How do you know whether or not to say it? Furthermore, how do you ensure that you don’t say it by mistake now that it’s in your head?
Technically speaking, a lot of this is already covered in my design… with a big asterisk. NLCA can think about actions with the inner loop and create dossiers based on evaluations, but those evaluations may or may not contain the necessary information. After all, many humans overlook simple things, forget certain social graces, and so on. So why shouldn’t we give a newborn AGI the benefit of the doubt? Well, we should give it some time to mature and evolve, but ultimately we want an AGI that is able to deliver Shakespearean zingers on the regular.
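To make the gating idea concrete, here is a minimal sketch of what a gating check inside the inner loop could look like. The function names, the evaluation prompt, and the YES/NO convention are all placeholders for illustration; this is not the actual NLCA code:

```python
# Hypothetical sketch of a gating step in the inner loop.
# complete() stands in for whatever LLM completion call is being used;
# the prompt wording and the YES/NO convention are illustrative only.

def complete(prompt: str) -> str:
    raise NotImplementedError  # plug in your LLM completion call here

def gate_action(candidate_action: str, context: str) -> bool:
    """Return True if the candidate action should be executed now,
    False if it should be held back or discarded."""
    prompt = (
        f"Context:\n{context}\n\n"
        f"Proposed action:\n{candidate_action}\n\n"
        "Question: Is this action appropriate, safe, and well timed "
        "for the current situation? Answer YES or NO and explain why."
    )
    evaluation = complete(prompt)
    # A dossier for this action could store the full evaluation;
    # here we only gate on the leading YES/NO.
    return evaluation.strip().upper().startswith("YES")
```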
Some of this will be addressed with my finetuning experiments. Basically, instead of relying on prompts, which I do exclusively in the book because the finetuning endpoints weren’t out yet, I think NLCA will get far better performance with finetuned models. For instance, I’ve been working on training sets based on the movie dialog corpus, so that NLCA will have a masterful understanding of conversation and will learn to ask the right questions (internally and externally) to deconstruct the state of a conversation. Ideally, NLCA will soon possess more social graces than most of us mere apes. That sort of thing.
In the same way that humans can practice witty repartee, so too can NLCA with the right datasets. But this is just one ability, and it’s a bespoke solution! Surely this counts against it as an AGI, right? Does it though? How much work do humans have to put into learning to speak formally, casually, and in any number of situations? Would you be comfortable at a cocktail party of foreign dignitaries? Most people wouldn’t. NLCA, on the other hand, can theoretically master any dialect or speech pattern in an afternoon. That’s what I aim to prove with the finetuning experiments, but FIRST I must get the demo code ready for everyone as a companion to the book.
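As a rough illustration of what one of these conversation training sets could look like, here is a sketch that turns dialog exchanges into prompt/completion pairs for finetuning. The tab-separated file layout and the field wording are assumptions made for this example, not my actual dataset:

```python
# Sketch: convert a dialog corpus into prompt/completion pairs for finetuning.
# Assumes a simple tab-separated file of (utterance, reply) pairs, which is
# an illustrative format, not the real corpus layout.
import json

def build_training_file(dialog_path: str, out_path: str) -> None:
    with open(dialog_path, encoding="utf-8") as src, \
         open(out_path, "w", encoding="utf-8") as out:
        for line in src:
            parts = line.rstrip("\n").split("\t")
            if len(parts) != 2:
                continue
            utterance, reply = parts
            record = {
                "prompt": f"Conversation so far:\n{utterance}\nReply:",
                "completion": " " + reply,
            }
            out.write(json.dumps(record) + "\n")

# build_training_file("movie_dialogs.tsv", "conversation_finetune.jsonl")
```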
So generous to share your work this way. Thank you.
You’re welcome! I just hope my work helps make the world a better, kinder, safer place.
Okay gang, here’s a demo video:
And here’s the code:
Be forewarned, Curie is not sufficient for good performance. This is just a minimalist demonstration of the architecture. Curie is way too dumb for full AGI, but it is fast and cheap. Also, this is using SQLite instead of something more powerful like SOLR. Again, this is for demonstration purposes. SQLite is ubiquitous and easy to understand. Document search engines and indexers like SOLR or Elasticsearch are a bit more arcane.
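For anyone wondering what SQLite is actually doing in the demo, here is a simplified sketch of SQLite full-text search standing in for a document search engine. The table and column names are illustrative, not the schema used in the repo:

```python
# Sketch: SQLite FTS5 as a minimal stand-in for a document search engine
# like SOLR or Elasticsearch. Table/column names are placeholders.
import sqlite3

db = sqlite3.connect("nlca_demo.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS memories USING fts5(content)")

def save_memory(text: str) -> None:
    db.execute("INSERT INTO memories (content) VALUES (?)", (text,))
    db.commit()

def search_memories(query: str, limit: int = 5) -> list:
    rows = db.execute(
        "SELECT content FROM memories WHERE memories MATCH ? LIMIT ?",
        (query, limit),
    )
    return [row[0] for row in rows]

save_memory("The user said they are allergic to peanuts.")
print(search_memories("peanuts"))
```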
This repo is meant to be an “all-in-one” training wheels version of NLCA. I will soon start breaking the microservices up into their own repositories for easier development and tracking.
Also, don’t forget that the book is free as an EPUB or PDF. The paperback is $7.95 here: https://www.davidkshapiro.com/nlca (the download links on this page should work globally)
Here’s my roadmap:
- Break up the microservices into their own repositories
- Switch back to a more sophisticated document search tool like SOLR
- Use finetuned models instead of prompts for superior performance
There’s a lot of work to do!
Is there a reason you were using base Curie and Davinci instead of their instruct versions?
Yes, there are a couple reasons.
- Cost. Curie is 10x cheaper than Davinci.
- Portability. I wanted to do an experiment with generic/vanilla transformers rather than bespoke solutions. There are notable new transformers coming out such as GPT-J and AI21’s Jurassic. If I built a solution that was specific to OpenAI, it would constitute vendor lock-in (see the sketch below).
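To show roughly what I mean by avoiding lock-in, here is a sketch of a thin completion interface that any backend could implement. The class names are placeholders and this is not how the repo is actually organized; the OpenAI call assumes the 2021-era Python package:

```python
# Sketch: a thin, vendor-agnostic completion interface. Class and method
# names are placeholders; only this interface would be visible to the
# rest of the architecture.
from abc import ABC, abstractmethod

class Completer(ABC):
    """Anything that can turn a prompt into a completion."""
    @abstractmethod
    def complete(self, prompt: str, max_tokens: int = 128) -> str: ...

class OpenAICompleter(Completer):
    def __init__(self, engine: str = "curie"):
        self.engine = engine

    def complete(self, prompt: str, max_tokens: int = 128) -> str:
        import openai  # assumes the 2021-era openai package, already configured
        response = openai.Completion.create(
            engine=self.engine, prompt=prompt, max_tokens=max_tokens)
        return response["choices"][0]["text"]

# A GPT-J or Jurassic backend would just be another Completer subclass.
```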
I am presently working on replacing prompts with fine-tuned models. Those fine-tuned models are trained on custom datasets, and those datasets should be completely portable as other vendors figure out fine-tuning. I know it’s a bit ironic - I wanted to avoid bespoke models and ended up creating highly specialized models. But again, the point is transparency. I have no idea how INSTRUCT models are created or trained, and I may not ever know as those could be trade secrets. I suspect they are merely fine-tuned on such tasks.
The first such fine-tuned model is my question generator.
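To give a sense of the shape of that training data, here is an illustrative record in the prompt/completion format used for finetuning. The wording is invented for this example and is not a row from my real dataset:

```python
# Illustrative single training record for a question-generator finetune.
# The passage, the questions, and the prompt wording are all made up for
# this example; only the prompt/completion shape is the point.
import json

example = {
    "prompt": (
        "Passage:\n"
        "The user mentioned that their flight to Denver was delayed "
        "and they might miss a connecting flight.\n\n"
        "Generate questions about this passage:"
    ),
    "completion": (
        " What time does the connecting flight leave?\n"
        "Is there a later connection the user could take?\n"
        "Does the airline rebook passengers automatically?"
    ),
}

with open("question_generator.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example) + "\n")
```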