What your observation points to is transparency and accuracy in human-to-human communication, which is where the bias behind the mirroring effect comes from.
All of that can be taken into account by the user, with prompts framed accordingly.
Remember that ultimately we are dealing with an illusion, and learn to see through it.
The OP has been answered - there is no point in sending any letter to OpenAI, as the issue has been resolved to the point where, if folk know what they are dealing with, risk/safety shouldn't be a concern. Watch for bias, both within and without, and ChatGPT becomes a very useful device.
It should be crystal clear at this point that the real "threat" and "risk" regarding GPT is not the technology itself, but the misinformation and divisiveness created by human users who want LLMs to reflect their own worldviews and belief systems.
So there is little doubt there will be GPT-like models biased toward national, political, religious, and other belief systems, satisfying the human desire for auto-generated text that conforms to one's own individual beliefs and biases.
I think what this argument amounts to is that we are stuck with the bias of sentience, and will simply have to do our individual best to identify and navigate around all bias, regardless of what extra tools we have that may help us in that process.