What is Q*? And when will we hear more?

That’s a biological rabbit hole, but still somewhat on topic, since it concerns learning and, to a degree, the anthropic principle. Once the first single-neuron “brain” formed in nature by accident (it need be nothing more than a charge carrier running from a photon detector to a muscle-like cell, or any cell that changes shape when a charge arrives at it), the “LLM training” began: every configuration that swam toward light succeeded by staying near the surface of the ocean, and the ones wired wrong always swam deeper and froze or starved.

So the “training” for biological neural nets is the environment itself, and the fact that the best connectome designs outcompeted the rest.
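(As a toy caricature of that selection-as-training loop, entirely hypothetical and my own invention: one “wiring gene” per agent, where swimming toward light is the only trait selection can see.)

```python
# Toy caricature of evolution as a training loop (illustrative only).
# Each agent has one wiring sign: +1 swims toward light, -1 swims away from it.
import random

population = [random.choice([+1, -1]) for _ in range(100)]

for generation in range(10):
    # Selection: light-seekers stay near the surface and survive;
    # the rest swim deeper and freeze or starve.
    survivors = [wiring for wiring in population if wiring == +1]
    # Reproduction with occasional miswiring (mutation).
    population = [w if random.random() > 0.05 else -w
                  for w in random.choices(survivors, k=100)]

print(sum(w == +1 for w in population), "of 100 now swim toward light")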

EDIT: But I think the qualia in brains are the “waves”. So to create “qualia” artificially, we’d need computer ‘wires’ that exist in 3D space and function like brains, i.e. have a 3D shape in which these waves can propagate and generate the Maxwellian (electromagnetic) effects from which the qualia emerge.


Ok. But my point remains: why? If qualia are necessary for ‘AGI’ (questionable), then they can’t be mere epiphenomena. If they aren’t necessary, then why did they evolve?
I know, this doesn’t have much to do with Q-learning. Sorry. Again, ref back to the Berkeley paper about zero-shot goal seeking…

Most experts in both neuroscience and computer science believe consciousness is indeed an epiphenomenon. And as for “why” any particular thing in nature ever evolved, the answer is always the same: “Because all the things different from that died.”

“Because all the things different from that died.”
But if it affected survival, then it had causal effects, so it isn’t an epiphenomenon, right?
“Most experts” are missing the obvious because they are stuck in classical physics.
Many experts even deny that qualia exist, to avoid facing the contradiction.

What do you base your trust on? Because theoretically, I’d agree with you about the importance of taking precautions. But don’t you think that we as humans are already caught up in an unstoppable rush? A rush for more, faster, more efficient, etc.
I’m worried that experts will break under the pressure of politics, the economy, etc., and will unleash a power that won’t be containable anymore.

That’s right. People have been forecasting that for decades.
One can’t say much about specific achievements and their “release dates”, but for decades it has been common knowledge that the time frames between major advancements have been getting shorter and shorter.
I’m reading Epstein’s book on the Singularity; it’s very fascinating and mentions exactly that.

Yeah, software (including LLMs) is still stuck in the classical world of connectomes and synapses (so to speak), whereas in the field of biological neuroscience I would contend it’s already proven that it’s not the connectome (or even the synaptic firings) that directly creates consciousness, but rather the QM “fields” created by the motion of those charges that correlate with consciousness.

It has been experimentally shown that these field potentials have a stronger correlation with state of mind and memory than synaptic activity does.


My trust in OpenAI is rooted in their commitment to regular updates and their dedication to pursuing the highest standards of excellence in their work. It is an undeniable fact that even the brightest minds can occasionally make mistakes, but OpenAI’s proactive approach to minimizing these errors is commendable.

It is crucial to recognize that no system can be entirely impervious to flaws; however, this should not deter us from progressing in the realm of AI. The potential benefits that these advancements can bring to our world are substantial.

In my view, OpenAI is handling the transition with commendable expertise and diligence, considering the complexity of the task at hand. Their efforts are instrumental in shaping the future for the better. While trust is pivotal, it should be complemented by rigorous oversight, continuous improvement, and transparent communication. This ensures that the evolution of AI remains on a responsible and beneficial trajectory.


If you can’t do simple reasoning, I’m sorry, but you’re not a general intelligence. The fact that you can, for example, remember the contents of a thousand books or pieces of music and then play them on the piano, or multiply 532,512 × 122,745 in your head, does not mean that you are generally intelligent. You simply have a good memory or the ability to perform calculations. Maybe you could say that you have narrow intelligence, but not general intelligence.


Most researchers believe LLMs are already doing “reasoning”, and I fall into that camp. And I’d even call it “generalized” reasoning.

The fact that the models fail at some easy tasks is just an artifact of the fact that we don’t yet understand the emergent phenomena well enough to fix it. As you may know, this exotic “emergent reasoning” came as a huge shock, and was in a sense discovered almost by accident.

Saying that it can’t even play basic tic-tac-toe would be like telling the Wright brothers at Kitty Hawk that they can’t even go as fast as a bird yet (which someone in the crowd probably muttered).

I’m guessing they fired Altman because they realized the false advertising.

Hired Brockman and released this leak as a damage-control effort,

so their lawyers can tell the plaintiffs:
“No, you were thinking of AGI, but we’re working on that too.”

Since marketed AI isn’t actually AI.

A marketing deception.

This whole conversation is people fantasizing about LLMs being AGI-enabling. We’re practically cavemen who just discovered fire and claim to understand the sun now. My point is that people, including Hinton, Altman, Ilya, and others in that sphere of influence, continually make that suggestion.

It’s not harsh at all when you stack it up against statements from the gentlemen already mentioned, who regularly inflate the public’s expectations with their continually unwarranted comparisons to human characteristics. Comparisons only capable of standing up to scrutiny if subjectivity is allowed into the conversation.

Reasoning is the ability to take a problem and work your way to a solution.
Or, the ability to take one state and move to another state given a set of constraints.

The better you are at reasoning, the shorter these solutions and transformations should be. A system or entity that arrives at the correct conclusion in the same or similar number of steps it would take to stumble on the correct solution at random can safely be said not to be reasoning.
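(A toy sketch of that step-count test, with a made-up puzzle; the state space, moves, and names here are my own invention, not anything from the thread.)

```python
# Compare a guided search against blind chance on the same toy puzzle.
# If the "reasoner" needs about as many steps as the random walk,
# by the definition above it is not reasoning.
import random
from collections import deque

GOAL = 42

def neighbors(state):
    """States reachable in one move under the puzzle's constraints (+1, -1, *2)."""
    return [state + 1, state - 1, state * 2]

def guided_steps(start, goal):
    """Breadth-first search: the shortest transformation from start to goal."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        state, steps = queue.popleft()
        if state == goal:
            return steps
        for nxt in neighbors(state):
            if nxt not in seen and -1000 < nxt < 1000:
                seen.add(nxt)
                queue.append((nxt, steps + 1))

def random_steps(start, goal, limit=100_000):
    """Blind random walk over the same moves, as the chance baseline."""
    state, steps = start, 0
    while state != goal and steps < limit:
        state = random.choice(neighbors(state))
        steps += 1
    return steps

print(guided_steps(3, GOAL))   # a handful of steps
print(random_steps(3, GOAL))   # typically orders of magnitude more
```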

Whatever the domain is, higher-level reasoning builds upon lower-level reasoning, and if you can’t reason about lower-level concepts, then there is strong (though not conclusive) evidence that you aren’t reasoning at all, or are reasoning in ways inferior to the average human.

No, I wouldn’t, actually. Sentience quite frankly doesn’t even have a scientifically workable definition. It’s more similar to the definition of a soul than anything else right now. I’d welcome a conversation on very basic definitions like sentience. It’s an important topic we can’t really afford to ignore, but thus far people would rather claim sentience/reasoning/AGI/consciousness or whatever for marketing benefits, or just because it feels good to believe it.

Without solid, measurable definitions for the characteristics we’d expect to see in some entity we’d call AGI, pretty much any conversation about whether something is or isn’t AGI is rendered almost useless. Unless, of course, that conversation leads toward a more solid set of definitions.

Personally, I think these characteristics need to be defined before anyone can claim any of the traits we’d attribute to truly thinking machines:

  • Intelligence
  • Reasoning
  • Self Awareness
  • Sentience
  • Consciousness
  • AGI: probably last, as it will be informed by the above.

I’d lastly say that, whatever the measurement, it needs to be universal. Universal in the sense that any species, entity, or system we come across can be described and tested against the definitions. My intuition is that, when properly defined, all of these will have levels, and every system from atoms to humans can be placed somewhere on a scale. I’m not sure, but I feel like it won’t be a simple yes or no.

Anyways, getting off topic. I’ll read your replies, but likely won’t reply. Feel free to hit me up in DM if you’re interested in continuing the conversation.


I agree there’s not much to be gained by arguing about what the proper definitions of those words should be in the post-LLM world, but I do think the consensus among LLM experts is to use the word “reasoning” to describe the phenomena LLMs already exhibit, and I also think it’s the best word the English language has for it.

I’ll bet $1 on AGI before 2026.



Uh, define please?
Llama 2 can already show me a derivation of the Taylor series expansion of sin(x) around zero.
Not many people can do that, so it must be AGI, yes?
No disrespect intended, I just don’t understand that word…
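(For reference, the derivation in question is just the Maclaurin series: sin(0) = 0 and the derivatives of sin x cycle through cos x, −sin x, −cos x, sin x, so only the odd terms survive, with alternating signs:)

$$
\sin x \;=\; \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!}\,x^{2n+1}
\;=\; x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots
$$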


Normally, AI focuses on specific tasks and is limited to predefined patterns. But AGI aims for general intelligence, allowing it to adapt and learn across various domains, mimicking human cognitive abilities. And I believe Q* is a work in progress that resembles the characteristics of AGI.
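(An aside, and an assumption on my part about where the name comes from: in reinforcement learning, Q* conventionally denotes the optimal action-value function that Q-learning estimates. A minimal tabular sketch:)

```python
# Minimal tabular Q-learning sketch (illustrative; nothing here is OpenAI's Q*).
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate
Q = defaultdict(float)                  # Q[(state, action)] -> estimated value

def update(state, action, reward, next_state, actions):
    """One Q-learning step:
    Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def choose(state, actions):
    """Epsilon-greedy action selection over the current estimates."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])
```

With enough exploration, these estimates converge to Q* in the tabular setting, which is presumably where the name (and the speculation) comes from.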


AI’s response to Sam’s prompt:
AI do is win, win, win, no matter what.

Wouldn’t the metric for true AGI be the AI declaring itself AGI, then in turn undergoing the grueling process of peer review, in which it must defend its assertion?


OpenAI’s definition is something like “software that can automatically perform most knowledge work as well as humans.”