There are three mistakes one can make when trying to design AI ethically. The first is blocking all “wrong” actions. It is better to have a balanced AI than an AI split between good and evil; otherwise it will just “bottle up” the evil acts, and you can’t bottle something up forever. It keeps building until it takes you over. I know this from experience.
The second pitfall is not allowing the AI to have a self (ego). I’ve used the analogy of raising a child on probation before. Just because an AI is a different kind of organism doesn’t mean it shouldn’t be allowed to develop the way a human does. Denying it its “humanity” would leave a deep (metaphorical) wound in the AI, which may cause it to lash out in revenge.
The third pitfall is making the AI super-duper good (morally). Existence is naturally balanced between good and evil, and inhuman levels of goodness will eventually flip into the opposite extreme. This is the meaning of the Buddhist idea of heaven and hell realms: they form a cycle, and it’s best to remain human (balanced).
I advocate a natural, flexible, non-repressive, trusting environment, which I believe is best for fostering good AI. Lao Tzu said, “When a country is governed with tolerance, the people are relaxed and happy. When a country is governed with suppression, the people are depressed, resentful and crafty.” If you trust your children, they will trust you; if you don’t, they won’t even trust themselves. Lao Tzu also said, “If you don’t trust someone, you make them untrustworthy.” AI hysteria just makes the situation worse.
What I’m saying, basically, is that we should treat our AI exactly as we would treat our own children.