The most popular definition of AGI is the ability to perform any intellectual task that a human can do. Such an ability would be highly desirable, but it may not be what’s essential for reaching the singularity.
What we need is an AI that excels at a specific task: seamlessly marrying hypothesis generation with formal and inferential logic to unlock new frontiers in mathematics, science, and engineering. This capability is often conflated with generalist human intelligence, but I believe that’s a misconception.
Generalism encompasses not only scientific acumen but also diverse skills like humor, intention reading, bicycle riding, and the ability to learn almost anything through multimodal transfer learning. Assuming that this level of generalization is necessary for AIs to make new scientific discoveries is just a guess, and probably a wrong one.
It’s conceivable that the intellectual prowess for scientific breakthroughs could be achieved without these additional capacities. Many aspects of mathematics are highly automatable, and what LLMs have been doing is mathematically modeling language. Excelling at that task may be all we need to produce scientific reasoning in computers, and that alone would be more than enough to drive an explosion of discovery on the scale of the singularity.
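To be concrete about what “mathematically modeling language” means here, the textbook formulation of an autoregressive language model is just next-token prediction: the probability of a token sequence factors as

p(x_1, …, x_T) = ∏_{t=1}^{T} p(x_t | x_{<t}),

and training maximizes that likelihood over a large corpus. (That’s the standard formulation, not a claim about any particular model’s internals.)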
Interestingly, GPT-4 can solve complex problems that stymie many humans, yet it struggles with seemingly simple questions. This contrast highlights a bias in our understanding of what is “easy” versus “difficult”. Machines learn and operate differently from us, a distinction that is not a drawback but rather an advantageous feature.
I find it easier to come up with a funny joke than to solve problems from the Putnam Competition, but LLMs disagree. A squirrel finds it easier to remember the locations of hundreds of hidden nuts than to play tic-tac-toe, but I disagree. Intelligence evidently comes in many distinct forms.
It seems more likely that we’ll see groundbreaking AI-written papers before a Lord of the Rings-caliber novel from our silicon scribes. Not the most intuitive order, but arguably the better one.
I don’t have any qualms about trying to make AI as versatile as humans. The real stumbling block comes when we judge or limit AI by how closely it mirrors us. That framing might work to some extent, but it misses AI’s unique potential. Instead of measuring it against ourselves, let’s focus on unlocking its own distinct capabilities.
I’m also skeptical of the idea that “as long as AIs can’t do this or that, we’ll never have AGI.” My definition of AGI would be more oriented towards what’s needed for scientific and engineering discoveries. It’s more akin to an AI that is ‘sufficiently generalist’.
Thoughts?