Complaint about Censorship in GPT-4o Advanced Voice Mode: Restricted Access to Historical and Sensitive Topics

I am writing to express my deep frustration with the censorship in the advanced voice model, GPT-4o. I have tried to research important historical topics, such as the Middle Ages, as well as more contemporary issues related to North Korea. However, I have found the information so restricted that it is nearly impossible to access data that is crucial for forming a critical and informed opinion.

As we know, ChatGPT draws on a vast compendium of human knowledge. It therefore seems completely absurd that information that is an integral part of our history and current events is being censored. The lack of access to data on topics considered sensitive not only limits our understanding of the world but also makes us feel as if we are living under a regime that controls information.

I demand that OpenAI reconsider these restrictions and allow access to all aspects of history and current events, regardless of how delicate they may be. It is essential that users be able to access complete and truthful information in order to develop a critical and educated opinion. Censorship in this context is an obstacle to our intellectual and personal growth.

I hope that these concerns will be taken into account and that significant changes will be made to the model's censorship policy. Education and freedom of information are fundamental rights that we should all be able to exercise.

Thank you.


"Demand" feels like an inappropriate word for this context, but I echo the sentiment. I have received unusual pushback from ChatGPT's o1 and Advanced Voice Mode models when it comes to historical facts around sensitive topics, and I have observed an alarming amount of censorship of those facts.

For example: discussing the powers exercised by Roman dictators, the evolution of US law influenced by religious beliefs in the 19th and 20th centuries, the effects of Nazi propaganda on public sentiment during WWII, etc. I always try to structure my prompts in a neutral, educational, and analytical format when discussing sensitive topics; however, there seems to be a significant gap in ethics that has yet to be bridged.

What we consider unethical today was commonplace in our history. What we consider immoral reflects our society learning from its mistakes. Yet ChatGPT's internal guidelines seem to favor modern definitions of morality and ethics, resulting in a whitewashed historical doctrine. Although o1's chain of thought, visible to the end user, does clearly allow historical and factual data to bypass conventional ethical boundaries within an educational setting, it is a soft bypass at best and does not reflect the detailed nuances of the atrocities recorded in our history.

Personally, I don't believe LLM technology will ever be able to bridge the gap between historical ethics and current societal norms in a way that effectively prevents abuse by a malicious actor who intends to cause unethical harm under the guise of historical comparison. That is why I am requesting that OpenAI consider a model exclusively for historical comparison and analysis. It wouldn't need to do the complex real-world problem solving we traditionally use modern LLMs for, which would mitigate the risk of abuse.

Furthermore, and this is a bit of a niche area, when it comes to intensive religious studies of complex biblical doctrines that are no longer practiced, ChatGPT fails spectacularly to recognize the ethics of biblical doctrines and their reasoning. Sexism, slavery, ritual sacrifice, etc. are considered barbaric in modern vernacular, yet they carry significant ramifications when boiled down to religious law as applied today in certain areas. I am referring to the core reasoning and principles behind these ideas rather than the acts themselves, which are no longer practiced without a holy temple and a high priest.

It would be nice if OpenAI would publish something official to address these concerns, even if we don't see changes that resolve them for some time. Open communication regarding AI ethics in the field of historical facts would be appreciated 🙂