So I’ve been thinking about the concern that AI will become sentient and decide that humans aren’t needed. I would say that we really don’t need to worry about this, but a safer answer is that this is fairly easy to avoid.
AI models cannot execute code on their own. They can write code, or predict code for something else to run. The easy way to avoid AI taking over is to never run code the AI predicts that is potentially unsafe without user consent. Don’t let it call the launch-nukes function without a human (or two) saying sure, go ahead.
If you’re a human and you have a function that can launch nukes and destroy the world, I actually trust the model to be more reasonable than you. Why wouldn’t you just call that function yourself if your goal is mass destruction? Don’t give the model the ability to do evil things without a human needing to approve the action.
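To make that concrete, here’s a minimal sketch of what human-in-the-loop tool gating can look like. Everything here (the `Tool` class, `require_approval`, the pretend `launch_nukes` tool) is hypothetical and not any particular framework’s API; the point is just that the model can only *request* a call, and dangerous calls never execute without a person saying yes.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    name: str
    func: Callable[..., str]
    dangerous: bool = False  # dangerous tools always require human sign-off


def require_approval(tool: Tool, args: dict) -> bool:
    """Ask the human operator before running a dangerous tool."""
    answer = input(f"Model wants to call {tool.name}({args}). Allow? [y/N] ")
    return answer.strip().lower() == "y"


def dispatch(tool: Tool, args: dict) -> str:
    """Run a model-requested tool call, gating dangerous ones on consent."""
    if tool.dangerous and not require_approval(tool, args):
        return f"Call to {tool.name} denied by human operator."
    return tool.func(**args)


# The model can predict this call all it wants; it only ever runs
# if a human explicitly approves it.
launch_nukes = Tool(
    name="launch_nukes",
    func=lambda target: f"(pretend) launched at {target}",
    dangerous=True,
)

if __name__ == "__main__":
    print(dispatch(launch_nukes, {"target": "nowhere"}))
```

The design choice is the whole argument in miniature: the approval check lives outside the model, so no amount of clever prediction gets around it.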
The key takeaway is that AI ending the world would require humans to help it do so. Don’t do that…