Well, they are only “concerning” to a small but vocal minority of ChatGPT users who are invested in this politically charged topic.
Most users are fine with rules that make society less hateful and offensive. This is not an easy topic, and at least in my view as a developer (and this is a developer community), these “self censorship” categories are reasonable for a beta release of a potentially very influential technology:
So, I’m much less concerned about moderation and filtering than I am with humans making ChatGPT and OpenAI a political issue and a divisive issue, which they certainly will do (and are already doing).
Considering how much hate there is in society, as well as violence against others and never-ending sexual exploitation, I think OpenAI has come up with a good “beta” approach to this.
See Also: OpenAI Usage Policies
In summary, I think OpenAI has done a much better job than many large tech companies in trying to start out on the “right foot” so their models are not used in harmful ways. Facebook, on the other hand, was very slow to implement such policies and, in my view, still does a poor job today.