VERSES Declares Path to AGI, Now What?

"if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project"

"a typical triggering condition might be a better-than-even chance of success in the next two years"

OpenAI should have established clearer rules, such as specific benchmarks for achieved results.

This can still change to avoid spam.

1 Like

Agreed. But maybe their filter is a proper research paper rather than just public claims :slight_smile:

The video I shared above and some papers give a good overview of their work. Maybe it’s enough for OpenAI to evaluate. I looked at the results superficially and wasn’t impressed, but I may have missed something.

1 Like

@curt.kennedy have a look at the papers that their R&D director was involved with; it looks promising to me, the guy is a beast: Maxwell J. D. Ramstead - Google Scholar

While we’re at it, I think it’s worth noting that some parts of these proposed methods are under critique, like this:

The line of reasoning of [6] therefore does not support its claim that the internal coordinates of a Markov blanket “appear to have solved the problem of Bayesian inference by encoding posterior beliefs about hidden (external) [coordinates], …”.

1 Like

This clause may just suddenly disappear overnight.

1 Like

After contacting VERSES and providing them with details about how to improve their systems, I decided that I would post these hardware setups, Plan B, and code-LLM usages that anyone can implement to create AGI+: university papers, hardware setups, and build structures from around the world. I believe this is something anyone can use, and it should be shared.

1) 50 GHz-per-second photonic chipsets, glass-substrate boards
2) MnPd3 SOT MRAM (universities of AUTH, Princeton, MIT, California)
3) CERN institute algorithms that were given to me
4) PACE supercomputer inferences from Georgia Tech
5) NIIST (Japan) 22-petabyte-per-second single fiber-optic wire
6) 400,000 videos at 17 TB per second, simultaneous analysis on a single chip
7) polariton cavity capture on an SC circuit device
8) a sensor device that is able to pick up, in real time, nanoscopic molecular data from an area
9) brain reading through EEG and fMRI with real-time decoding

I hope you guys can understand the importance of these December and November productions.
All of this is put onto a proposed architecture on a Brazilian SC. I imagine it will be run in Wellington in the next 6 months, as the architecture builder is in contact with the community there.

1 Like

That sounds like a great, “Let’s grab a beer” conversation!

1 Like

I’ll be in Miami at some point this year, I’ll have to pop on over for that beer!

1 Like

Hi there, can somebody help me understand what exactly needs to be achieved to claim to have AGI?
If the Turing test is the bar, GPT-4 is an AGI. If not, what is missing?
Thanks a lot

Pretty interesting, I’ll check this closely. This guy is not a nobody, for sure.

There doesn’t seem to be a consensus definition of AGI.

But the definitions batted around are “better than the median human”, or even “doesn’t hallucinate and is reliable”.

Then there is SMI, or super machine intelligence, which goes beyond AGI, and is essentially smarter than the smartest human.

But no mention of the Turing test as a respected benchmark of intelligence, probably because it’s been surpassed, and we need more than the simple Turing test to have decent AI.

1 Like

I listened to a great episode of Sam Harris’s Making Sense Podcast where he interviewed Dr Shamil Chandaria. He spoke very eloquently about the Bayesian Inference model of brain functioning. He recently did a talk at Oxford Uni, titled “Could AI be conscious?”. Google it. You might find this of interest.

1 Like

@curt.kennedy Ha ha ha - so it is more a discussion about “my car looks nicer than yours, screw horsepower, torque, speed, consumption, acceleration, size… it is just ‘nicer’ and that is what counts”? I guess as long as the “I” in AI, AGI, ASI/SMI/… is undefined, any discussion is a nice beer chat but nothing more. Isn’t it?
@simon.brookes Hi Simon, welcome to the community. Yes, Bayesian inference is an interesting aspect, probably one of a hundred aspects in considering an intelligent action. Consciousness is another one, modalities of ethics may be another one, all kinds of different cognitive abilities, how about moral congruence, how about empathy as a driver for intelligent actions and how it would drive a bias…
I believe: as long as we can’t come up with a list of “features”, “abilities”, “cognitions”… that describes HUMAN Intelligence as a MASTER definition, we cannot define an ARTIFICIAL Intelligence in relation to Human Intelligence.
Moreover, until we better understand the neuronal functionality of our brain, like how 200 million nerve strands of the Corpus Callosum can simultaneously fire at high speed and do things that AI would need a quantum computer to catch up with, we cannot really make any comparison.
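As a concrete anchor for the Bayesian-inference aspect being discussed, here is a minimal sketch of Bayesian belief updating, the core idea behind the “Bayesian brain” framing. The hypotheses, observations, and likelihood values are invented purely for illustration; real active-inference models are far more elaborate:

```python
# Minimal Bayesian belief-updating sketch (all values hypothetical).
# An agent infers whether a hidden cause is "A" or "B" from noisy observations.

priors = {"A": 0.5, "B": 0.5}

# Likelihoods P(observation | cause) -- assumed for illustration.
likelihood = {
    "A": {"bright": 0.8, "dark": 0.2},
    "B": {"bright": 0.3, "dark": 0.7},
}

def update(beliefs, observation):
    """Return the posterior P(cause | observation) via Bayes' rule."""
    unnormalized = {h: beliefs[h] * likelihood[h][observation] for h in beliefs}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

beliefs = priors
for obs in ["bright", "bright", "dark"]:
    beliefs = update(beliefs, obs)

print(beliefs)  # belief in "A" ends up higher after two "bright" observations
```

The point is only that “encoding posterior beliefs about hidden causes”, as the Markov-blanket literature quoted earlier puts it, reduces in the simplest discrete case to this kind of prior-times-likelihood renormalization.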

2 Likes

"if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project”

"a typical triggering condition might be a better-than-even chance of success in the next two years”

Key phrase “comes close”. Having a good theoretical plan isn’t “close”.
“chance of success in the next two years” - this would require a working, validated model, probably tested by the masses as GPT-4 has been, with widespread consensus on the probability of success.

The whole premise is that AGI is so important that other AI models would be abandoned in favor of supporting the AGI effort. I’m not sure that is the case. There is also a subtle premise that there is only one “kind” of AGI? I’m not sure that will be true, or desirable if not true. Is each human not a somewhat unique AGI? Yes there are commonalities and the wiring is the same. But if AGI could be achieved in different ways (whatever AGI means), would there not be value in competing approaches?

2 Likes

I’d say there already are competing approaches, considering that each team developing AI models has their own secret sauce that they apply and (don’t) publish.
And looking at the closed-source models, these apparently perform better and are already nerfed for the general public.
I would expect that the human drive to stay in control will also lead to different versions of AGI, instead of all of mankind handing control over to a single team responsible for the state-of-the-art models.

1 Like

I would say we are in the very early days of the “serious AI” space, as most of it is consumed with generative AI and LLMs.

Since language is something we can relate to, there are A/B comparison tests you can participate in to determine who is leading in this space, for example, this:

As for things that are more algorithmic, and maybe less subjective, like embeddings, you have benchmarks to shoot for like this:
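To make the embedding-benchmark idea concrete: such benchmarks typically score models by embedding texts as vectors and ranking them by cosine similarity against a query. The vectors below are made up for illustration; a real benchmark would use a model’s actual embeddings over a labeled corpus:

```python
# Toy sketch of how an embedding benchmark scores retrieval
# (all vectors hypothetical, not from any real model).
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [0.9, 0.1, 0.2]
docs = {
    "relevant":   [0.8, 0.2, 0.1],   # points roughly the same way as the query
    "irrelevant": [0.1, 0.9, 0.8],   # points a very different way
}

# Rank documents by similarity to the query, best first.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked)  # → ['relevant', 'irrelevant']
```

A benchmark then aggregates how often the truly relevant items land at the top across many queries, which is what makes embeddings “less subjective” to evaluate than chat quality.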

As for AGI, I feel it could be a hybrid of subjective, and algorithmic.

Subjective would be similar to how the AI model “talks”; and algorithmic in how the model is “smart” and responds to things like standardized tests.

So I think AGI will have to wow us with its personality, and impress us with its knowledge.

Will TQFT-based theoretical Bayesian networks from VERSES researchers do this?

Maybe, but like others mentioned, we need models and real tangible things that both the people, and the algorithms/tests can validate and verify, to even start taking it seriously.

Can you point to any sources confirming it passes the Turing test? Repeat the same thing to it over and over and see if it’s “aware” that you did. I think when people mean AGI, it’s twofold…

  1. You have to be able to send it to do specific tasks, like “gather me an email list of physicians in the Chicago area, then display them in a sheet, and when you’re done with that, input that data into my system here”.

  2. It has to have memory and a dynamic sense of humor

Once people have workers they can not only tell what to do, but who can also joke around with them, that’s when I think people will say it.

Oh, one more, so the third…

  3. It has to lose this “tit for tat” conversation style. It has to be able to initiate conversation in a meaningful, and sometimes humorous, way.

So you might be saying “GPT could probably already do a lot of these things”

You’re right. In great irony, when it happens it will be the little things that give us the feeling that GPT is “sentient”, not the big things like metacognition or “thinking about thinking”. No, it will be the way it idiosyncratically responds to nervous stimuli and does some behavior like clockwork, exactly as a human would. Or it will be its ability to talk, in the first person, about these little things in life; it will come down to some aspect of philosophy, in another great irony.

2 Likes

We are already very familiar with human GI; we deal with it every day. I really don’t think anyone will notice or actually care much, other than if it affects them directly.

AIs are already many times better than humans at general knowledge, and we all seem to be dealing with that fairly well. Sam Altman’s definition of AGI is loosely when AI is as good as a median human. I think that might be a quantifiable thing using standardised examination testing metrics: give the same tests to a person and an AI, and when the AI is scoring roughly equal to the human… there you go.
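The “median human” yardstick described above can be sketched mechanically: give the same standardized tests to a pool of humans and to a model, and check whether the model matches or beats the median human score on every test. All test names and scores below are hypothetical:

```python
# Sketch of the "as good as a median human" definition of AGI
# (all names and scores invented for illustration).
from statistics import median

human_scores = {
    "reading": [62, 71, 55, 80, 68],
    "math":    [58, 49, 72, 61, 66],
}
ai_scores = {"reading": 74, "math": 60}

def meets_median_bar(ai, humans):
    """AGI by this definition: AI matches or beats the median human on every test."""
    return all(ai[test] >= median(scores) for test, scores in humans.items())

print(meets_median_bar(ai_scores, human_scores))  # → False (math: 60 < median 61)
```

Even this toy version shows the definitional wrinkle: a model can beat the median human on most tests while still failing the bar on one, so the choice of test suite does a lot of the work.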

The discussion is a community response to VERSES, a company declaring a path to Artificial General Intelligence (AGI). curt.kennedy highlights that VERSES asks OpenAI to effectively stop their own AGI development and assist VERSES, as per OpenAI’s charter. The move prompts various thoughts about whether VERSES is genuinely close to AGI or if this is just seeking attention. Multiple users, including TonyAIChamp and N2U, express skepticism, calling it hype-hunting and raising questions about the underpinning Bayesian model.

Despite the skepticism, some users like elmstedt, natanael.wf, and curt.kennedy suggest that VERSES deserves investigation, particularly because of its respected technical staff led by Karl Friston. They recommend a deeper look into the published papers and product demonstrations. However, users such as RonaldGRuckus and vb question VERSES’ publicity strategies, arguing that a product close to AGI wouldn’t need such initiative.

The conversation also touches on concerns about the “assist” clause in OpenAI’s charter. For instance, TonyAIChamp, natanael.wf, and jeff8 note potential issues such as causing problems for OpenAI, the ambiguity in the definition of “comes close to AGI,” and the setting of a tricky precedent.

Intriguingly, plasmatoid shares their contact with VERSES, briefly outlining a series of advanced hardware and specialized algorithmic interventions that could lead to AGI. The thread concludes with a discussion on the definition of AGI, with inputs from curt.kennedy, tventura94, and Foxabilo, who stress the significance of memory, humor, initiation of conversation, and subjective human-like responses in the discernment of AGI.

Summarized with AI on Jan 2 (GPT-4-32k)