VERSES Declares Path to AGI, Now What?

Wondering what the community thinks of this …

First, the OpenAI charter has an “assistance” clause, which states (ref):

We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”

So guess what: a company, VERSES, is now declaring a path to AGI (ref). It is asking OpenAI to follow its own charter, stop its AGI development, and assist VERSES instead.

First, I was a bit taken aback that OpenAI had this clause in its charter at all, but now it appears there is one company claiming it has a path to AGI.

What should OpenAI do here?


From their declaration it seems more like hype-hunting.

Among other red flags:

The fading confidence in their viability as the foundation for AGI, as expressed by Mr. Altman above, is further supported by recent research highlighting the limitations of LLMs — and by a growing consensus in the AI Industry.

Have I missed something?


Hype-hunting or not.

WIRED recently did a big write-up on the guy behind VERSES, Karl Friston, so I'm thinking there is some credibility here, and perhaps even a “better-than-even chance of success in the next two years”.


Agreed, I’ve taken a bit of time to dig into what they think is their “big breakthrough”, and it seems to be based on this:

Which isn’t a new thing; it’s literally based on math invented by a guy who died in 1761 (Thomas Bayes).

Tl;dr: it’s cool but computationally expensive :rofl:
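To make the “computationally expensive” point concrete (this is my own toy illustration, not anything from the VERSES material): exact Bayesian belief updating, the 1761-era math in question, is a one-liner to write down, but its cost scales with the size of the hypothesis space, which is why real systems fall back on approximations like variational inference.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Exact posterior over discrete hypotheses: p(h|o) ∝ p(o|h) · p(h)."""
    unnorm = prior * likelihood
    return unnorm / unnorm.sum()

# Toy example: 3 hypotheses with arbitrary numbers.
prior = np.array([0.5, 0.3, 0.2])
likelihood = np.array([0.1, 0.7, 0.2])   # p(observation | hypothesis)
posterior = bayes_update(prior, likelihood)
# Hypothesis 2 now dominates: posterior ≈ [0.167, 0.7, 0.133]

# The catch: over N binary latent variables the joint hypothesis space has
# 2**N states, so the same exact update needs a 2**N-entry table.
# Tractable for N = 20; hopeless for N = 1000.
```

Exact inference like this is cheap for tiny discrete problems and blows up combinatorially after that — which is the whole reason the approximate-inference literature exists.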


I’m not saying it is only hype. They probably have something, but that something is probably multitudes less than what they make of it, and frankly I doubt it’s worth OpenAI’s attention.

Any breadcrumbs to follow here? Papers? Links?

One of the top hits on Google now links to this thread :rofl:


He’s certainly a very intelligent fellow, no question about that! But it seems like the interest surrounding him peaked around six years ago, when some bookmakers tipped him as a possible Nobel laureate.

The Nobel Prize is given for science that has made an impact. It’s worth noting that laureates wait an average of 22.3 ± 10.8 years between conducting their prize-winning research and receiving the prize, and it’s usually on the longer end for this type of research.

Tl;dr: he never became a Nobel laureate but did get other fancy awards, and his research was mostly relevant 20 years ago :laughing:

Here’s the earliest article about the subject:


It smells like hype for sure …

So wondering what the hard data shows.

However, also wondering what folks here think about the “assist” clause in OAI’s charter? It threw me, to say the least.

But also, this clause is inviting any company that thinks it has a pathway to AGI to declare it loudly. For example, VERSES recently put out its open letter as a full-page ad in The New York Times (ref).


And exactly this would not be necessary if a product were close to achieving AGI. Instead of a paid ad, a product demo showcasing superior capability would be the real deal.


It seems like a noble thing, but in reality it will create all kinds of problems for the company and will eventually make this clause’s impact the exact opposite of what it was intended to achieve.

I agree with the sentiments above, but the latest papers seem to be more relevant to their AGI claims. Luckily I can burn my last few IEEE downloads this month to grab them.

Never thought I’d see TQFTs again. :rofl: If anything, this makes for interesting reading.

Fields, C., Fabrocini, F., Friston, K., Glazebrook, J. F., Hazan, H., Levin, M., & Marcianò, A. (2023). Control Flow in Active Inference Systems—Part I: Classical and Quantum Formulations of Active Inference. IEEE Transactions on Molecular, Biological and Multi-Scale Communications, 9(2), 235–245.

Fields, C., Fabrocini, F., Friston, K., Glazebrook, J. F., Hazan, H., Levin, M., & Marcianò, A. (2023). Control Flow in Active Inference Systems—Part II: Tensor Networks as General Models of Control Flow. IEEE Transactions on Molecular, Biological and Multi-Scale Communications, 9(2), 246–256.
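For anyone who hasn’t met active inference before, here is a one-line summary (mine, not taken from these specific papers): agents are modeled as minimizing variational free energy, an upper bound on surprise about observations,

$$ F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] = D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o) $$

Since the KL term is non-negative, $F \ge -\ln p(o)$: minimizing $F$ simultaneously fits the approximate posterior $q(s)$ to the true posterior and maximizes model evidence. Doing that tractably at scale is exactly the computational question raised earlier in this thread.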


I’ll dig in later/tomorrow, but I’m not expecting much until I see this:


Just FYI, if you are ever looking for a paper you don’t have access to, let me know, there’s a good chance I can grab it for you.


Very interesting discussion.

As they are saying that “we have preliminary evidence to qualify and justify our claim”, I believe OpenAI should investigate.



I did a quick search and found this:


“Genius” product presentation webinar:

About Karl Friston, from Wikipedia: "In 1996, Friston received the first Young Investigators Award in Human Brain Mapping, and was elected a Fellow of the Academy of Medical Sciences (1999) in recognition of contributions to the bio-medical sciences. In 2000 he was President of the international Organization for Human Brain Mapping. In 2003 he was awarded the Minerva Golden Brain Award and was elected a Fellow of the Royal Society in 2006 and received a Collège de France Medal in 2008.

He became a Fellow of the Royal Society of Biology in 2012, received the Weldon Memorial Prize and Medal in 2013 for contributions to mathematical biology and was elected as a member of EMBO in 2014 and the Academia Europaea in 2015. He was the 2016 recipient of the Charles Branch Award for unparalleled breakthroughs in Brain Research and the Glass Brain Award from the Organization for Human Brain Mapping. He holds Honorary Doctorates from the universities of York, Zurich, Liège and Radboud University."

Karl Friston’s profile on Google Scholar:

Papers published by Verses AI:


I’ve had dealings in the past with Gabriel Rene, the CEO of this “AI” company. I’d be fine with never dealing with that marketing person again.


So…from my limited brain.

I feel like that clause is there to say, “If you are sufficiently wiping the floor with us, we will admit it and join you; we’re not trying to add tension to the already delicate task of successfully deploying AGI.”

I think they’d need some tech and a roadmap that OpenAI couldn’t develop themselves.

Much more likely for this to happen with Mistral or Anthropic.


What is this ad :rofl:

Definitely feels like an “any publicity is good publicity” marketing stunt. I don’t know why, but I have a strange feeling that the discovery of AGI wouldn’t require random billboard ads and “pls notice me” letters :rofl:


Sure, there are many red flags. Sure, it is highly unlikely that this company actually has an AGI coming out of the oven.

But putting immediate skepticism aside, how often does a company whose technical staff is led by someone with a reputable background like Karl Friston reach out claiming they have solid evidence to show a different approach to achieving AGI?

It doesn’t seem like it would cost much to request a 1-hour pitch to show results and “prove” that it’s worth the involvement.

Maybe the technical team is good and the marketing team is terrible :joy:


There is a strong counter-argument, though: entertaining this request would set a precedent, inviting every company that declares a path to AGI to demand the same attention.