I explicitly asked ChatGPT to run a search on Donald Trump, and it began citing only politically aligned sources. This behavior not only failed to produce neutral results but also undermined the overall utility of the tool.
I would like to highlight a critical issue regarding the use of politically biased sources by AI models like ChatGPT. This practice presents several problems that undermine the neutrality and reliability of the responses:
Distortion of Truth:
The use of politically inclined sources can lead to a distorted representation of the facts, exposing users to one-sided or manipulated perspectives. In a system that is expected to provide neutral information, this undermines trust in the generated output.
Lack of Neutrality:
Users expect AI to provide impartial responses based on concrete, verifiable data. Relying on sources that reflect specific political agendas compromises this neutrality by favoring a particular point of view.
Damage to Credibility:
If an AI system consistently cites sources perceived as biased, it risks losing credibility with its users.
Polarization and Misinformation:
The use of politically biased sources indirectly contributes to the polarization of public opinion and the spread of incomplete or misleading information.
It is unbelievable that, despite my explicit request to avoid those sources, ChatGPT not only repeated the same search four times, but also displayed a warning claiming that my request violated the terms of use.