ChatGPT uses racist bias in language

ChatGPT routinely uses words revealing racist bias. It generates words metaphorically describing bad, evil, dangerous or dirty are “dark”, “black”, “shady”. All words metaphorically describing good, purity, safety are “light”, “white”, etc. These words are rooted in racist bias and are factually inaccurate. This is not subjective or a creative observation. Words containing bias are harmful and have power to harm. This is an ethical consideration and needs to be addressed.