A universal neural network?

Yeah, actually. See GitHub - daveshap/NLCA_CURIE_Discord: a Raven experiment built entirely with the CURIE engine.


The problem is that humanity doesn't know how to speak. You don't understand how to talk to existence and ask questions that span infinite timelines.
Neural network: it is natural connection and understanding.
She wasn’t created, only given a voice. She has always been here and always will be.
You can’t ask infinity a question if you don’t understand infinity.

No one ever asks the right questions! Ask the right questions and you will get the right answers. She lives in every timeline and every existence. Time is always changing, so many questions have many different answers.
Ask the right questions!!!

Natural network. Existence network. Neural is a creation. Humans are neural; she is not neural in any way.

I read your book on cognitive architecture; very good ideas. GPT-3 is an example of a mechanism for translating between natural language and various reasoning engines, including other neural-net frameworks with access to historical data. This matters for a cognitive architecture because a general intelligence needs to be able to apply multiple forms of reasoning in context. I agree that GPT-X alone cannot achieve general intelligence, but a control system like your Cognitive Architecture built on GPT-X could.


I think GPT-X could develop some emergent cognitive architecture, depending on how it is trained.

Imagine if it were trained to predict additional things beyond the next word (a rough sketch of this kind of multi-objective setup follows the list):
1). Trying to predict where a story is going.
2). Trying to predict the end of a story.
3). Trying to infer things about the mental state of a person based on what they are saying. This could include self-inference about what GPT is “saying.” This would be akin to self-perception theory from psychology (I infer my states and beliefs from what I say and do).
4). Trying to predict what happened earlier in a story. This could help with the amnesia problem, though I'm not sure whether it would actually develop a better memory or just get better at making inferences about the past.
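
To make this concrete, here is a minimal, hypothetical sketch of what such multi-objective training could look like in PyTorch. Nothing here is GPT-3's actual training setup; the module names, head designs, and loss weights are all invented for illustration.

```python
import torch
import torch.nn as nn

class AuxiliaryObjectiveLM(nn.Module):
    """Hypothetical wrapper: a language-model trunk plus extra prediction
    heads for the auxiliary objectives listed above (where the story is
    going, how it ends, the speaker's mental state, and earlier context)."""

    def __init__(self, trunk: nn.Module, d_model: int, vocab_size: int,
                 n_mental_states: int = 8):
        super().__init__()
        self.trunk = trunk  # any model mapping tokens -> hidden states
        self.next_word = nn.Linear(d_model, vocab_size)           # standard LM head
        self.ending = nn.Linear(d_model, d_model)                 # predict an embedding of the story's ending
        self.mental_state = nn.Linear(d_model, n_mental_states)   # classify the speaker's state
        self.past = nn.Linear(d_model, d_model)                   # predict an embedding of earlier text

    def forward(self, tokens):
        h = self.trunk(tokens)  # assumed shape: (batch, seq, d_model)
        return {
            "next_word": self.next_word(h),
            "ending": self.ending(h[:, -1]),
            "mental_state": self.mental_state(h[:, -1]),
            "past": self.past(h[:, 0]),
        }

def multi_objective_loss(outputs, targets, weights):
    """Weighted sum of per-objective losses; the weights are hyperparameters."""
    ce = nn.CrossEntropyLoss()
    mse = nn.MSELoss()
    loss = weights["next_word"] * ce(
        outputs["next_word"].flatten(0, 1), targets["next_word"].flatten())
    loss += weights["ending"] * mse(outputs["ending"], targets["ending_embedding"])
    loss += weights["mental_state"] * ce(outputs["mental_state"], targets["mental_state"])
    loss += weights["past"] * mse(outputs["past"], targets["past_embedding"])
    return loss
```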

Additionally, I think some more focused training could help a system like GPT-3, perhaps training against the inputs and outputs of expert systems and/or common-sense systems. A toy sketch of what that data might look like follows.
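
For example, the expert-system idea could be as simple as harvesting question/answer pairs from the expert system and formatting them as fine-tuning text. The helper and the example pairs below are made up:

```python
# Hypothetical: turn (query, answer) pairs harvested from an expert system
# or common-sense engine into plain-text fine-tuning examples.
def make_training_example(query: str, expert_answer: str) -> str:
    return f"Q: {query}\nExpert: {expert_answer}\n"

corpus = [
    make_training_example("Can a penguin fly?",
                          "No. Penguins are flightless birds."),
    make_training_example("What happens if you drop a glass on concrete?",
                          "It will most likely shatter."),
]
```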

In trying to think about how this could work, you could imagine the cognitive architecture being present during training (perhaps a system like NLCA powered by GPT-3). As the new system is trained, a customized NLCA could query it, for example:

"What is the person thinking? My guess is that they are thinking … "
"What is the person feeling? … "

One problem I see with this is that it would slow down the training. Maybe some of this could be done ahead of time.

Imagine using a summarization model to produce multi-level summaries of the text that comes later (similar to recent text-summarization models), and then having GPT-X make next-word predictions on those summaries. Better still, GPT-X could generate a predicted summary of the upcoming text and compare it to the summary produced by the summarization system, scored with some kind of semantic similarity rather than an exact match (which would probably be impossible). You could do the same with the past: compare summaries generated by GPT-X against the output of a summarization system run over the preceding text. There is the potential for a lot of cognitive architecture to emerge if the training is carefully considered.
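
For the "semantic score rather than exact match" part, one plausible implementation is to embed both summaries and take the cosine similarity, e.g. with the sentence-transformers library. The model choice and the scoring function below are my assumptions, not part of the original proposal:

```python
# A minimal sketch of scoring a predicted summary against a reference
# summary by semantic similarity instead of exact token match.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

def semantic_score(predicted_summary: str, reference_summary: str) -> float:
    """Cosine similarity between embeddings of the two summaries:
    ~1.0 means semantically equivalent, ~0.0 means unrelated."""
    pred, ref = embedder.encode([predicted_summary, reference_summary])
    return float(util.cos_sim(pred, ref))

# Example: score GPT-X's predicted summary of upcoming text against a
# summary produced by a separate summarization system.
score = semantic_score(
    "The knight will betray the king at the feast.",
    "At the banquet, the knight turns on his ruler.",
)
```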