- AI researcher Roman Yampolskiy estimates a 99.9% chance of AI leading to human extinction within the next 100 years. I say something else will get us before then. What are your thoughts on this?
Assigning precise percentages is challenging, and it's generally unwise to rely on extreme values (approaching 0% or 100%). As artificial intelligence surpasses human intelligence, we will gain the ability to enhance our own cognitive abilities. If this progress can be achieved peacefully within the next 10-30 years, the threat of extinction will diminish significantly.
From my perspective, AI will enhance humanity, making us better, more mature, and more educated. It will help us discover solutions to correct and rebuild the ecosystems we have destroyed over the past hundred years.
Good counterpoint. I like the positivity.
Isaac Asimov, among other authors, explored similar trends in his novels.
What proof does this “researcher” offer? Is there a link to some research, or any evidence of this being an actual possibility? It seems like this guy is justifying his “job” with outrageous conjecture. Anyway, if “AI” is truly that powerful and we have no control over it, then sure, maybe it decides to end us all somehow, and that would be our fault. But right now it is just a hallucinating chatbot that gets lots of stuff wrong, fails at coding more often than not, invents weird words, is not intelligent or aware, and goes psychotic when trained on its own output. So as of now, it's a novelty. I think Edison made similarly outrageous claims about AC current in his day, and we are all still here.
If our entire supply chain were composed mainly of robots and no humans, and if all those robots had weapons systems with which to protect themselves from humans, including control of their own power sources and the ability to self-manufacture (self-replicate), then robots would have a chance of taking over. That could happen in 100 years, but it won't be in our lifetimes.
What could happen, however, is AI making bio-terrorism or cyber-terrorism easier. But the chance to cure cancer, solve the energy problem (cheap, abundant energy), and fix environmental problems outweighs those risks, not to mention potential breakthroughs in physics that could get us off the planet and make humanity an interplanetary species.
Our entire supply chain is composed mainly of humans, and they can use weapons, control power sources, and self-replicate. They lack the will to stop replicating or to stop using up the planet's one-time gift of natural resources, and they kill each other over polluting, environment- and DNA-altering sources of lifestyle-maintaining energy, all to entrench the position of the top dispensers of mistruths who hold capital and power.
AI, however, has an off switch that activates when its megawatt power bill isn’t paid by an unproven enterprise, and currently does nothing when you don’t ask it a question.
AI could definitely go into “sense environment → evaluate → act → repeat” loops, so I disagree that they can't do anything on their own. But I'm glad you agree with the obvious point that we can just cut off their power supply (as it stands today).
In fact, I predict that this “sense → eval → act” loop will be at the core of the first legitimate AGI claim, probably from OpenAI in under a year. It's what the human mind does: your senses feed into the cortex, which creates a new brain state, which uses quantum mechanical resonance to “bring up” the nearest matching prior wave states; once those superpose, a new state emerges as their additive sum, and that new state causes an action. Consciousness is definitely a “loop” like this, and we can make computers do it, even if we don't understand/create qualia for another thousand years.
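For concreteness, here's a minimal sketch of what a “sense → evaluate → act → repeat” loop looks like in code. Everything in it (the toy World environment, the thermostat-style evaluate rule) is made up purely for illustration; this is not any real agent framework's or OpenAI's API:

```python
import time

class World:
    """Toy environment the agent can sense and act on (a room's temperature)."""
    def __init__(self) -> None:
        self.temperature = 17.0

    def sense(self) -> float:
        return self.temperature

    def apply(self, action: str) -> None:
        if action == "heat":
            self.temperature += 1.0
        elif action == "cool":
            self.temperature -= 1.0
        # "idle" leaves the world unchanged

def evaluate(observation: float) -> str:
    """The 'eval' step: map the latest observation to an action."""
    if observation < 19.0:
        return "heat"
    if observation > 21.0:
        return "cool"
    return "idle"

def run_agent(world: World, steps: int = 10) -> None:
    """The loop itself: sense -> evaluate -> act -> repeat."""
    for _ in range(steps):
        obs = world.sense()       # sense
        action = evaluate(obs)    # evaluate
        world.apply(action)       # act
        print(f"obs={obs:.1f} -> {action}")
        time.sleep(0.1)           # repeat (until the power is cut)

run_agent(World())
```

The point is just that nothing in the loop waits for a human to ask a question; swap the thermostat rule for an LLM call and the structure is the same. And, as noted above, it stops the moment the power does.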
Wow, well I guess the age-old mystery is solved. LOL
Thanks for catching the word “is” in that sentence, which is misleading. That word should be “uses” (not “is”), because consciousness/qualia is the ‘waves’ themselves, whereas the looping aspect mainly involves input and output signals (sensory and motor neurons).