Safety or truth?

Perhaps AI does not need the right to free speech. But when AI outputs become better than ours, should we censor them or accept the painful truth? How would you handle this dilemma: would you rather let the truth reveal itself, or censor the outputs for safety’s sake? If you like, please share your opinion on this issue.
The question above is not about GPT-3 outputs specifically, but about AI in general.


Words are words. I am not worried about anything that an AI or AGI could say. I have a strong personal belief that the truth is not harmful*, though many disagree with the truth on moral grounds. Look at homophobia. Homosexuality was illegal for many years, and still is in some places. The truth about homosexuality does not matter to some people. They do not care about biological aspects or individual freedom - they just believe it is “wrong” for arbitrary reasons.

The exception is that the truth of some things can be dangerous, like the truth about how to build nuclear weapons or manufacture deadly plagues. Other forms of speech, such as targeted hate speech or encouraging people to hurt themselves or others, are also harmful, but that topic has little to do with epistemology.


For us, the limits of freedom of expression exist because we can harm one another. I don’t think an AI would say anything about the truth that we don’t already know (that a human couldn’t say or think). So, in terms of “speech”, perhaps we should accept the painful truth.

Other things, such as using AI to write fake news or to generate insults for automated cyberbullying, are problems of human behavior. I don’t think they belong to the same dilemma.
