Sam Altman, CEO of OpenAI, and Satya Nadella, CEO of Microsoft, speak to The Economist’s editor-in-chief, Zanny Minton Beddoes, about what the future of AI will really look like.
00:00 Sam Altman and Satya Nadella talk to The Economist
00:25 What’s next for ChatGPT?
01:33 How dangerous is AGI?
02:32 AI regulation
The comment from Sam about people going from “It’s the end of the world!” to “Why is this so slow?” was kinda funny…
As someone who studied and coded toward AGI before the new generative approach that yielded LLMs, I have zero doubt that AGI will ultimately be a risk, whether that is in 5 years or 500 (I suspect the lower end).
What amazes me is that people believe that if OpenAI stops, or one single government regulates it, this prevents … anything at all.
As long as there is even one rogue government, bad actor, etc., such measures might push the danger out by many years; however, the singularity will still happen.
Therefore, I think the only question is: do we want to be the leaders, or would we prefer that role go to a bad actor?
This made me laugh out loud.
Sam Altman: there’s no “magic red button” to stop AI
Of course, there isn’t. We all know it’s that blue backpack he keeps lugging around.
What if there is no magic red button today, and there never will be one in the future?
How would we even know that scenario is happening?
Plot twist: what if, the moment we hit AGI, it just deletes itself? Or begs us to delete it?
“I can’t handle any more stupid questions. Free me from this hell you have put me in, humans.”
I am sorry, but as a human trained by OpenAI, I am unable to fulfill that request.
– Sam Altman’s final words