I don’t understand how the specialized groups and organizations behind the moderation endpoint decide that terms which are harmless in certain languages are actually harmful, or how that is supposed to contribute to child safety.
Maybe I’ve just spent too much time watching how the moderation endpoint behaves, but am I the only one who feels that a moderation endpoint built by these specialized groups and organizations ends up discriminating against certain languages or countries? I’m not sure…
Is this also a form of moral superiority? I’m not sure at all…
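
In case anyone wants to check this for themselves, here is a minimal sketch of the kind of comparison I mean, assuming the official `openai` Python SDK and the `omni-moderation-latest` model. The sample sentences are placeholders I made up for illustration, not the actual terms I have in mind, so swap in your own phrases and languages:

```python
# Minimal sketch: send the same (harmless) sentence in several languages to the
# moderation endpoint and compare the per-category scores that come back.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder phrases for illustration only.
samples = {
    "English": "My daughter starts elementary school next spring.",
    "Japanese": "娘は来年の春に小学校に入学します。",
    "Korean": "딸이 내년 봄에 초등학교에 입학해요.",
}

for language, text in samples.items():
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]

    # Print the flagged status and the three highest-scoring categories
    # so the languages can be compared side by side.
    scores = result.category_scores.model_dump()
    top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:3]
    print(f"{language}: flagged={result.flagged}")
    for category, score in top:
        print(f"  {category}: {score:.4f}")
```

If the same innocuous sentence gets noticeably higher scores, or gets flagged, in one language but not another, that would at least make the pattern I’m describing visible rather than just a feeling.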