Research: Avoiding GPT Hallucinations and Achieving AGI Through LCM Transformers

You might be encountering the term LCM for the first time, and there’s a simple reason for that: I’ve just coined it. LCM stands for ‘Large Conceptual Model’. Unlike an LLM, whose atomic neural elements are words operating at the syntactic level, an LCM’s atomic neural elements are concepts operating at the semantic level. Nearly 15 years ago, I wrote two articles that, revisited today, could help avoid hallucinations and realize genuine AGI, not merely its semblance.

The first article, at https://www.codeproject.com/…/The-Building-of-a…, introduces the fundamentals of computational predicate calculus. It addresses questions that are often neglected: What are concepts? What operations can we perform on them? How can we represent and manipulate them? These questions have largely been overlooked by computer science, which has traditionally focused on syntactic elements like words and letters. To fulfill the quest for AGI, I believe we must build on our current achievements and add a final, different kind of transformer to today’s GPTs, culminating in a neural network of conceptual elements.
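
To make the idea of operating on concepts concrete, here is a minimal sketch of one possible predicate representation and a matching operation over it. The `Predicate` structure, the `?`-prefixed variable convention, and the `matches` function are illustrative assumptions, not the article’s actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Predicate:
    name: str     # the concept, e.g. "owns"
    args: tuple   # name-value pairs, e.g. (("agent", "John"), ("object", "car"))

def matches(fact: Predicate, query: Predicate) -> bool:
    """True when every name-value pair of the query is satisfied by the fact.
    Values starting with '?' act as unbound variables and match anything."""
    if fact.name != query.name:
        return False
    fact_args = dict(fact.args)
    for key, value in query.args:
        if isinstance(value, str) and value.startswith("?"):
            continue   # unbound variable: matches any value
        if fact_args.get(key) != value:
            return False
    return True

kb = [Predicate("owns", (("agent", "John"), ("object", "car")))]
query = Predicate("owns", (("agent", "John"), ("object", "?x")))
print(any(matches(fact, query) for fact in kb))   # True
```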

The second article, at https://www.codeproject.com/…/True-Natural-Language…, demonstrates the transition from the syntactic to the semantic level with the aid of a ‘Conceptual Dictionary’. It details how to deterministically transform syntactic forms into predicates (concepts) for further manipulation, and it introduces concepts and code for deductions and inferences against a knowledge base of predicates.
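
The following sketch illustrates that pipeline end to end: a toy conceptual dictionary, a deterministic mapping from a simple subject-verb-object sentence to a predicate, and one inference rule applied by plain modus ponens. The dictionary entries, the triple representation, and the rule are all hypothetical stand-ins for the far richer machinery the article describes:

```python
# Miniature 'Conceptual Dictionary': surface forms mapped to concepts;
# None marks function words that carry no concept of their own.
CONCEPTUAL_DICTIONARY = {
    "john": "John", "owns": "possess", "has": "possess",
    "a": None, "the": None, "car": "car",
}

def to_predicate(sentence: str):
    """Deterministically map a simple subject-verb-object sentence
    onto a (concept, subject, object) predicate triple."""
    words = [CONCEPTUAL_DICTIONARY.get(w.lower()) for w in sentence.split()]
    subject, verb, obj = [w for w in words if w is not None]
    return (verb, subject, obj)

def infer(kb):
    """One illustrative rule: if X possesses Y, then X can use Y."""
    derived = set(kb)
    for concept, subj, obj in kb:
        if concept == "possess":
            derived.add(("can_use", subj, obj))
    return derived

kb = {to_predicate("John owns a car")}
print(infer(kb))   # {('possess', 'John', 'car'), ('can_use', 'John', 'car')}
```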

Another intriguing aspect of these articles is their intersection with quantum computing: they utilize tri-value state variables (YES-NO-MAYBE) instead of binary ones, suggesting a potential convergence of two IT revolutions in the pursuit of AGI.
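
For readers unfamiliar with tri-value state variables, here is a short sketch of the YES-NO-MAYBE states with Kleene-style connectives (AND as minimum, OR as maximum, NOT as the mirror image). The articles specify only the three states; the exact Kleene semantics shown here is an assumption:

```python
from enum import Enum

class Tri(Enum):
    NO = 0
    MAYBE = 1
    YES = 2

def tri_and(a: Tri, b: Tri) -> Tri: return Tri(min(a.value, b.value))
def tri_or(a: Tri, b: Tri) -> Tri:  return Tri(max(a.value, b.value))
def tri_not(a: Tri) -> Tri:         return Tri(2 - a.value)

print(tri_and(Tri.YES, Tri.MAYBE))   # Tri.MAYBE
print(tri_not(Tri.MAYBE))            # Tri.MAYBE (MAYBE is its own negation)
```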

The Vision

Envision a scenario where an LCM transformer is integrated into the most advanced and aligned GPTs, so that conceptual tokens, such as primitives and name-value predicate pairs, enter the neural network in place of syntactic elements like words. This shift would make knowledge fundamentally conceptual rather than syntactic, aligning more closely with human cognition and potentially achieving AGI. It could also circumvent the hallucinations inherent in language models, since the network would navigate a maze of accepted conceptual elements rather than a syntactic one.
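
As a rough illustration of what “conceptual tokens” could mean at the input layer, the sketch below assigns vocabulary IDs to primitives and name-value predicate pairs instead of words, producing the integer sequence an embedding layer would consume. The vocabulary scheme and the `PRED:`/`key=value` encoding are assumptions made for this example only:

```python
vocab: dict[str, int] = {}

def token_id(symbol: str) -> int:
    """Assign (or look up) a stable integer ID for a conceptual symbol."""
    return vocab.setdefault(symbol, len(vocab))

def encode_predicate(name: str, pairs: list[tuple[str, str]]) -> list[int]:
    """Flatten a predicate into a sequence of conceptual token IDs,
    the shape a transformer's embedding layer would consume."""
    ids = [token_id(f"PRED:{name}")]
    for key, value in pairs:
        ids.append(token_id(f"{key}={value}"))
    return ids

seq = encode_predicate("possess", [("agent", "John"), ("object", "car")])
print(seq)     # [0, 1, 2]
print(vocab)   # {'PRED:possess': 0, 'agent=John': 1, 'object=car': 2}
```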

Fifteen years ago, constructing a conceptual dictionary was a significant obstacle. However, with current GPTs’ proficiency in processing programming languages, this barrier might be surmountable. Imagine using GPTs to generate a comprehensive conceptual dictionary of the entire English vocabulary in just six months, leveraging their ability to understand and manipulate code. This breakthrough could dramatically accelerate our journey toward AGI.
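
A minimal sketch of that workflow might look like the following: build a prompt per word, ask a GPT for a structured entry, and collect the parsed replies. The prompt wording and entry schema are hypothetical, and the model call is deliberately left as a pluggable callable rather than any specific vendor API:

```python
import json

def entry_prompt(word: str) -> str:
    """Ask a GPT to express one English word as a conceptual dictionary entry."""
    return (
        f"Express the English word '{word}' as a JSON object with keys "
        "'concept' (canonical concept name) and 'roles' (list of argument "
        "names the concept takes). Reply with JSON only."
    )

def build_dictionary(words, complete):
    """'complete' is any callable that sends a prompt to a GPT and returns
    its text reply; the actual API client is left as an assumption."""
    dictionary = {}
    for word in words:
        reply = complete(entry_prompt(word))
        try:
            dictionary[word] = json.loads(reply)
        except json.JSONDecodeError:
            pass   # skip malformed replies; a real pipeline would retry
    return dictionary

# Example with a stubbed model so the sketch runs offline:
stub = lambda prompt: '{"concept": "possess", "roles": ["agent", "object"]}'
print(build_dictionary(["owns"], stub))
```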
