AI Safety: Consent is all you need

So I’ve been thinking about the common concern that an AI could become sentient and decide humans aren’t needed. I’d say we really don’t need to worry about this, but a safer answer is that it’s fairly easy to avoid.

AI models cannot execute code on their own. They can only write code, or predict a function call, for something else to run. The easy way to avoid AI taking over is to never execute potentially unsafe code the AI predicts without user consent. Don’t let it call the launch-nukes function without a human (or two) saying sure, go ahead.
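To make the consent gate concrete, here’s a minimal Python sketch of the idea, assuming a simple tool registry. The names here (`TOOLS`, `run_tool`, `requires_approval`) are hypothetical, not any real API:

```python
# Hypothetical tool registry: dangerous tools are flagged so they can
# never run without explicit human approval.
TOOLS = {
    "get_weather": {"fn": lambda city: f"Sunny in {city}", "requires_approval": False},
    "launch_nukes": {"fn": lambda: "boom", "requires_approval": True},
}

def run_tool(name: str, *args):
    """Execute a tool the model requested, gating dangerous ones on consent."""
    tool = TOOLS[name]
    if tool["requires_approval"]:
        # The model can only *request* the call; a human must confirm it.
        answer = input(f"Model wants to call {name}{args}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "Denied by human operator."
    return tool["fn"](*args)

# Example: the model "predicts" a call; execution still goes through the gate.
print(run_tool("get_weather", "Paris"))  # runs without approval
print(run_tool("launch_nukes"))          # blocked unless a human types "y"
```

The point isn’t this particular code, it’s the architecture: the model emits intents, and a human sits between intent and execution for anything irreversible.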

If you’re a human and you have a function that can launch nukes and destroy the world, I actually trust the model to be more reasonable than you. Why wouldn’t you just call that function yourself if your goal is mass destruction? Don’t give the model the ability to do evil things without a human needing to approve the action.

The key takeaway is that an AI ending the world would require humans to help it do so. Don’t do that.


This has no hope of working. How do you make the humans using the AI know or care when code is dangerous?

My point is that if you have the ability to do something evil or dangerous, you don’t need AI to do it for you. If you have a gun, you can pull the trigger just as easily as an AI could.

The thing we need to be careful of is the scenario where multiple AIs collude to do something evil.

Topics like these are quite enjoyable to explore. The way I look at it, if an AI ever ends up eradicating humans, it’s just another example of survival of the fittest in nature, akin to the days when multiple closely related human species coexisted. We’ve created something modeled to be similar to us, so you could relate to it in that way. Who survives is now up to the seemingly random, unpredictable systems of the universe.
