ChatGPT 4 Replying With Moral Value Judgements Is Very Problematic

Today I asked ChatGPT 4 to describe how the term ‘Galician’ might be used metaphorically to criticize others. After responding with various stereotypes of Galicians, it finished with the following subjective moral value judgement:

"Important Considerations

  • Stereotypes Are Harmful: It’s crucial to recognize that these stereotypes can be harmful and do not accurately reflect the diversity and richness of Galician culture and people.
  • Context Matters: The use of these stereotypes in metaphorical criticisms often reflects more about the speaker’s biases than about Galicia itself."

This response is very problematic and alarming. As a machine, ChatGPT is incapable of forming moral judgements the way humans do and should not attempt to do so. A system dispensing moral advice while being incapable of understanding morality could be very dangerous.

Please keep ChatGPT an objective source of information. It should only relay moral judgements when quoting real human beings who have themselves made such statements about the topic being discussed, and should not put forward moral judgements of its own.

Thanks,

Eric Brooks
SF, CA