AGI is what we want, but not what we need for the singularity

The most popular definition of AGI is the ability to perform any intellectual task that a human can do. While highly desirable, this may not align with what’s essential for achieving the singularity.

What we need is an AI that excels at a specific task: seamlessly marrying hypothesis generation with formal and inferential logic to unlock new frontiers in mathematics, science, and engineering. This capability is often conflated with human-style generalist intelligence, but I believe that is a misconception.

Generalism encompasses not only scientific acumen but also diverse skills like humor, intention reading, bicycle riding, and the ability to learn almost anything through multimodal transfer learning. Assuming that this level of generalization is necessary for AIs to make new scientific discoveries is just a guess, and probably a wrong one.

It’s conceivable that the intellectual prowess for scientific breakthroughs could be achieved without these additional capacities. Many aspects of mathematics are highly automatable. What LLMs have been doing is mathematically modeling language. Excelling at this task is possibly what we need to create scientific reasoning in computers, which would be more than sufficient for an explosion of discoveries and evolution on a singularity scale.

Interestingly, GPT-4 can solve complex problems that stymie many humans, yet it struggles with seemingly simple questions. This contrast highlights a bias in our understanding of what is “easy” versus “difficult”. Machines learn and operate differently from us, a distinction that is not a drawback but rather an advantageous feature.

I find it easier to create a funny joke than to solve problems from the Putnam Competition, but LLMs disagree. A squirrel finds it easier to remember the locations of hundreds of hidden nuts than to play tic-tac-toe, but I disagree. These multiple manifestations of intelligence are empirically evident.

It seems more likely that we’ll see groundbreaking AI-written papers before a Lord of the Rings-caliber novel from our silicon scribes. Not the most intuitive order, but arguably the better one.

I don’t have any qualms about trying to make AI as versatile as humans. The real stumbling block comes when we start judging or limiting AI based on how closely it mirrors us. That framing may work to some extent, but it misses the point of AI’s unique potential. Let’s focus instead on unlocking its own distinct capabilities.

I’m also skeptical of the idea that “as long as AIs can’t do this or that, we’ll never have AGI.” My definition of AGI would be more oriented towards what’s needed for scientific and engineering discoveries. It’s more akin to an AI that is ‘sufficiently generalist’.

Thoughts?


Hey Natanael

Thank you for the interesting topic!

For starters, it would be great to know which definition of the singularity you use.

P.S. You may be interested in @gmkung’s library for improved logical reasoning: Extending ChatGPT's deductive and logical reasoning capabilities - #25 by TonyAIChamp

Einstein is reported to have said something like “the most important faculty for discovery is imagination.”
Me: yup. Otherwise you are just stuck in A* search forever.
Of course, ‘imagination’ is one of those words many philosophers would have us abandon, since it is so notoriously undefined.
Hmm.

FWIW, I’m working on exactly that problem (discovery), currently having my bot/agent/… ‘digest’ hundreds of papers around a specific topic in early-stage cancer detection. I’d love to hear more if you’re doing active work in the area.


This is an interesting approach. I believe GPT-4 has great potential for generating hypotheses and inferences. One of the tests I’ve been running is working with different Tree-of-Thoughts (ToT) prompts on mathematical questions that GPT-4 cannot solve.
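For illustration, here’s a minimal sketch of the kind of ToT loop I’ve been playing with, using the openai Python client. The prompts, branching factor, depth, and crude numeric self-scoring are placeholders of my own, not the scheme from the original ToT paper:

```python
# Minimal Tree-of-Thoughts-style search: branch on candidate next steps,
# score each partial chain, keep the best few, repeat.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def tot_solve(problem: str, branches: int = 3, depth: int = 2) -> str:
    frontier = [""]  # partial reasoning chains, best-first
    for _ in range(depth):
        scored = []
        for chain in frontier:
            for _ in range(branches):
                step = ask(
                    f"Problem: {problem}\nReasoning so far:{chain}\n"
                    "Propose the single next reasoning step."
                )
                candidate = chain + "\n" + step
                rating = ask(
                    f"Problem: {problem}\nReasoning:{candidate}\n"
                    "Rate how promising this reasoning is, 0-10. Reply with a number only."
                )
                try:
                    score = float(rating.strip().split()[0])
                except ValueError:
                    score = 0.0  # unparseable self-rating: treat as a dead end
                scored.append((score, candidate))
        scored.sort(key=lambda s: s[0], reverse=True)
        frontier = [c for _, c in scored[:branches]]
    return ask(f"Problem: {problem}\nReasoning:{frontier[0]}\nGive the final answer.")
```

Beam-style breadth-first expansion is only one choice here; depth-first variants with backtracking are equally plausible.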

However, a bottleneck of these current systems is the abrupt decline in reasoning quality as input length increases. To make discoveries and gain insights, the system needs to remember what has been seen and analyzed, so that hypothesis generation and the relationships drawn between data grow increasingly rich. At this point, the fact that LLMs only reason well over short inputs hinders the process.

In my spare time, I’m working on designing an architecture based on biological neural nets. I appreciate Numenta’s approach, yet I’m seeking more accurate replicas that don’t rely on temporal mechanisms like those in Spiking Neural Networks (SNNs) or the Neural Engineering Framework (NEF). Research into various animals’ connectomes shows that each neuron functions like a mini-brain, with different neuron types specialized for various tasks. Biological brains have ganglia and complex connections beyond simple feed-forward and feedback mechanisms. My passion lies in capturing this complexity in artificial neural networks. My initial venture in this field is developing a neural network modeled after C. elegans, a simple yet thoroughly studied organism with a fully mapped connectome.
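To make the direction concrete, here’s a toy sketch of the kind of non-spiking, rate-based simulation I have in mind. Everything in it is a stand-in: the weight matrix is random rather than loaded from the real wiring data, and the leaky-tanh update rule is my own simplification, not an established model of C. elegans dynamics:

```python
# Toy rate-based network on a fixed connectome, deliberately avoiding
# spike timing (no SNN/NEF machinery). W is random here; the real
# project would load it from published C. elegans wiring data.
import numpy as np

N = 302  # neuron count in the C. elegans hermaphrodite
rng = np.random.default_rng(0)
mask = rng.random((N, N)) < 0.05            # sparse connectivity
W = rng.normal(0.0, 0.5, size=(N, N)) * mask

def step(rates, external, dt=0.01, tau=0.1):
    """One Euler step of a leaky rate model: tau * dr/dt = -r + tanh(W r + I)."""
    return rates + dt / tau * (-rates + np.tanh(W @ rates + external))

rates = np.zeros(N)
stim = np.zeros(N)
stim[:10] = 1.0                              # drive a few 'sensory' neurons
for _ in range(1000):
    rates = step(rates, stim)
print(rates[:5])                             # settled activity of the first neurons
```

Replacing the random matrix with the measured connectome, and the tanh nonlinearity with per-neuron-type dynamics, is where the real work would be.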

However, I believe in multiple paths to achieving intelligence, as evident in the animal kingdom. While most animals possess neural structures centered around a main processing unit controlling smaller ganglia, octopuses present a unique case with their highly decentralized network. Interestingly, octopuses evolved differently from more traditionally intelligent species, like mammals, yet they demonstrate remarkable cognitive abilities.

Sure. Multiple paths. Airplanes don’t flap their wings.
Personal opinion: too many of us are stuck on the idea that ‘processing’ is all.
I think memory architecture is the most important ignored area at the moment.
Human conscious working memory, according to the old trope, is 7 ± 2 items.
Should fit in 32k tokens, no? 🙂
Now what?
RAG as currently conceived is a 3-year-old tentatively touching its toe at the edge of the ocean.
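For concreteness, this is roughly the whole skeleton of what I mean by “RAG as currently conceived”: vectorize, retrieve top-k by similarity, stuff the hits into the prompt. The corpus and query are toys, and I’m using TF-IDF where production systems use learned embeddings and a vector store:

```python
# The skeleton of a retrieve-then-generate (RAG) pipeline.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "the hippocampus consolidates memories during sleep",
    "transformers attend over a fixed context window",
    "squirrels cache thousands of nuts each autumn",
]
query = "how do models handle long-term memory?"

vec = TfidfVectorizer().fit(corpus + [query])
# TF-IDF rows are L2-normalized, so the dot product is cosine similarity.
scores = (vec.transform(corpus) @ vec.transform([query]).T).toarray().ravel()
top = [corpus[i] for i in np.argsort(scores)[::-1][:2]]  # two best matches

prompt = "Context:\n" + "\n".join(top) + f"\n\nQuestion: {query}"
print(prompt)  # this augmented prompt is what gets sent to the LLM
```

Nothing in that loop remembers anything across queries, which is exactly the toe-in-the-ocean problem.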

Yes, I agree that memory is one of the most relevant aspects. Probably something deep in the architecture of transformers needs to change for this element to truly exist in LLMs.

Beautifully written.

Extra sentence because OpenAI won’t let me simply compliment another’s writing.

Disagree. I had GPT tell me a polar bear and penguin joke, and the punchline was “You look like you blew a seal.” It was hilarious; “blew” and “seal” were the double-entendre words in the punchline. I thought it was pretty good. So, it CAN make funny jokes.

Definitely, sometimes funny things come out. But considering everyday conversations, ironies and spontaneous jokes are a natural part of our interactions. This kind of humor comes effortlessly to humans, but it’s a challenge to replicate authentically in a machine like GPT. Understanding when to deliver a joke or irony that resonates with someone else is almost an intuitive skill for us, yet it’s hard to convey to a machine. Meanwhile, the machine excels at processing complex content that can be challenging for humans.


The term ‘singularity’ was popularized by Vernor Vinge in his 1993 essay ‘The Coming Technological Singularity’. Interestingly, he predicted that by 2023 machines would become smarter than humans: https://ntrs.nasa.gov/citations/19940022856

The core concept is the impossibility of making reliable predictions about the future once superintelligences are acting in it.

The point I mentioned in the first post is about reaching a rate of evolution and discovery so great that it becomes difficult to control or keep up with. Picture a hypothetical scenario where an AI is capable of:

  1. Initiating a project to improve a paper or solve a specific problem
  2. Searching for relevant theoretical materials (books, papers…) and extracting the most useful points
  3. Creating hypotheses
  4. Searching for other materials to validate its hypotheses
  5. Creating more hypotheses
  6. Searching for more materials

All these actions would be performed in an unsupervised manner, with powerful tools at its disposal such as code generation/testing, library installation, etc. (see the sketch below).
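Every helper in this sketch is a stub standing in for a real tool (search, extraction, code execution); none of these names are real APIs, and the control flow is only one guess at how such a loop could be wired:

```python
# Hypothetical outline of the unsupervised discovery loop above.
from dataclasses import dataclass

@dataclass
class Result:
    text: str
    solved: bool

def search_literature(goal, notes):    # steps 2, 4, 6: find relevant materials
    return [f"paper relevant to {goal}"]

def extract_key_points(sources):       # pull out the most useful points
    return [f"key point from {s}" for s in sources]

def generate_hypotheses(goal, notes):  # steps 3, 5: propose new hypotheses
    return [f"hypothesis {len(notes)} for {goal}"]

def test_hypothesis(h):                # e.g. generate and run code, check proofs
    return Result(text=f"tested: {h}", solved=False)

def discovery_loop(goal: str, max_rounds: int = 10) -> list:
    notes = []                                      # the growing working memory
    for _ in range(max_rounds):
        sources = search_literature(goal, notes)
        notes.extend(extract_key_points(sources))
        for h in generate_hypotheses(goal, notes):
            result = test_hypothesis(h)
            notes.append((h, result.text))
            if result.solved:
                return notes                        # new knowledge enters the database
    return notes

print(len(discovery_loop("improve a primality-testing algorithm")))
```

The hard part is not this control flow; it is keeping `notes` useful and critically consumable as it grows, which is exactly the long-input reasoning bottleneck discussed above.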

How far are we from this? Very close. It mainly requires improved logical reasoning ability over long inputs, enough to form a large, useful memory that can be drawn on critically; with that, the system would work.

When this works efficiently enough that the final result is a genuinely improved paper or a practical, acceptable solution, this newly generated knowledge would enter the database and could be consulted in future tasks.

This is where things get out of control: while humans were still trying to read and absorb the new knowledge being generated, AIs would already be using it to obtain further knowledge. Fields like mathematics, cryptography, and computer science/software, which depend less on external elements, would be the first to reach singularity.

In a second phase, the knowledge obtained in these areas would be used to enhance existing concepts in physics, chemistry, and biology, all still on theoretical ground.

For even more relevant advances, practical experiments with physical materials would be necessary. This step is slower. But the intelligence of AIs would help speed up this process too.

In engineering, it’s the same thing. AIs would propose the creation of structures and materials that initially humans would need to build. But these AIs would quickly also propose the creation of robots to undertake such construction projects.

Although there is a physical bottleneck between theoretical discoveries and application in everyday life due to the need for physical constructions, the singularity in the theoretical field alone already has the element of unpredictability, and could also represent the moment when AIs create better AIs.

It’s cool to speculate about all this, and obviously it’s hard to imagine a scenario like this in which AIs wouldn’t be capable of performing any task that a human can. However, my argument is that the onset of the singularity isn’t contingent on that capability. The process that starts the singularity could begin with an unconscious AI, not as generalist as a human, but sufficiently skilled to perform that specific intellectual task effectively.

…your mom excels at processing complex content that can be challenging for humans.