This whole conversation is people fantasizing about LLMs being AGI-enabling. We’re practically cavemen who just discovered fire and now claim to understand the sun. My point is that people, including Hinton, Altman, Ilya, and others in that sphere of influence, continually make that suggestion.
It’s not harsh at all when you stack it up against statements from the gentlemen already mentioned, who regularly inflate public expectations with their continually unwarranted comparisons to human characteristics. Comparisons that only stand up to scrutiny if subjectivity is allowed into the conversation.
Reasoning is the ability to take a problem and work your way to a solution.
Or, put another way, the ability to take one state and move to another state given a set of constraints.
The better you are at reasoning, the shorter these solutions and transformations should be. If a system or entity arrives at the correct conclusion in roughly the same number of steps it would take to stumble on the correct solution at random, you can safely say it is not reasoning.
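To make that concrete, here’s a toy sketch (my own illustration, not anyone’s benchmark): find a hidden number in [1, N] by random guessing versus binary search. The gap in average step counts is the kind of signal I mean when I say a “reasoning” system should beat the random baseline by a wide margin.

```python
import random

def random_guess_steps(target, n, rng):
    # Guess uniformly at random (with replacement) until we hit the target.
    steps = 0
    while True:
        steps += 1
        if rng.randint(1, n) == target:
            return steps

def binary_search_steps(target, n):
    # Use the structure of the problem: halve the search space each step.
    lo, hi, steps = 1, n, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if mid == target:
            return steps
        elif mid < target:
            lo = mid + 1
        else:
            hi = mid - 1

rng = random.Random(0)
n, trials = 1000, 200
targets = [rng.randint(1, n) for _ in range(trials)]
avg_random = sum(random_guess_steps(t, n, rng) for t in targets) / trials
avg_binary = sum(binary_search_steps(t, n) for t in targets) / trials
print(f"avg steps, random guessing: {avg_random:.0f}")  # on the order of N (~1000)
print(f"avg steps, binary search:   {avg_binary:.1f}")  # on the order of log2(N) (~10)
```

A system whose step count looks like the first number rather than the second isn’t exploiting any structure of the problem, which is the whole point of calling something reasoning.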
Whatever the domain, higher-level reasoning builds on lower-level reasoning. If you can’t reason about lower-level concepts, that is strong (though not conclusive) evidence that you aren’t reasoning at all, or that you’re reasoning in ways inferior to the average human.
No, I wouldn’t actually. Sentience quite frankly doesn’t even have a scientifically testable meaning. Right now it’s closer to the definition of a soul than anything else. I’d welcome a conversation on very basic definitions like sentience. It’s an important topic we can’t really afford to ignore, but thus far people would rather claim sentience/reasoning/AGI/consciousness or whatever for marketing benefits, or just because it feels good to believe it.
Without solid, measurable definitions for the characteristics we’d expect to see in some sort of entity we’d call AGI, any conversation about whether something is or isn’t AGI is all but useless. Unless, of course, that conversation leads toward a more solid set of definitions.
Personally, I think the following need to be defined before anyone can claim a system has the characteristics we’d attribute to truly thinking machines:
- Intelligence
- Reasoning
- Self Awareness
- Sentience
- Consciousness
- AGI - probably last as it will be informed by the above.
I’d lastly say that whatever the measurement is, it needs to be universal. Universal in the sense that any species, entity, or system we come across can be described and tested against the definitions. My intuition is that, when properly defined, all of these will have levels, and every system from atoms to humans can be placed somewhere on a scale. I’m not sure, but I suspect it won’t be a simple yes or no.
Anyways, getting off topic. I’ll read your replies, but likely won’t reply. Feel free to hit me up in DM if you’re interested in continuing the conversation.