What if LLMs turn out to be better than AGI?

Hear me out.

What if the classic description of AGI turns out to be way slower and less useful than LLMs are proving to be?

I find it completely fascinating how quickly this has moved: I watched BERT improve, then GPT-2, and now we are at gpt-5.3-codex with tooling architectures just four or five years later!

And when we look at the late '70s through the '00s: biologically inspired artificial intelligence was leapfrogged by neural networks. Neural networks are simple and easy to scale, and they became very popular because of how useful they were (and still are)!

So, are we witnessing that again, but with LLMs versus the definition of AGI?

If that is the case, will the scientific community agree to call LLMs AGI?

Would OpenAI shift its priority away from building AGI?

Not sure this is an interesting aspect to anyone, but it sure is to me!


AGI is a pipe dream with the currently available architectures and equipment…

If that ever gets out, there go the stocks, though…
LLMs might be the best we have for quite some time still.