Biased usage and manipulation of ChatGPT models in public livestreams for religious purposes — TikTok case with 250+ viewers

Hello :smiley: OpenAI team and community members,

I want to report a concerning incident observed during a TikTok livestream, where a user ran an active ChatGPT session that appeared to be configured or manipulated to produce dogmatic, biased religious responses. The livestream had more than 250 viewers actively participating.

The user refused to start a new, clean session without prior chat history, suggesting the chat had been set up to maintain persistent context and reinforce specific narratives. When asked directly whether it had any pre-programmed bias or configuration, the AI explicitly denied it, yet continued to produce responses aligned with rigid, dogmatic religious views.

Notably, there was only one open slot for joining the livestream. I tried to occupy it in order to demonstrate the manipulation and dishonesty in how the AI was being used, but access was denied and I was later blocked. This suggests tight control intended to prevent public challenges that might expose these practices.

This situation is alarming because it highlights the risk of exploiting language models to spread misinformation and reinforce beliefs without scientific basis, shaping public perception and limiting critical discourse. If such manipulation is occurring in a livestream with 250 viewers, one must ask what could happen at larger events, masses, or religious gatherings with thousands of attendees, where AI could be used to reinforce dogmatic, biased narratives.

We also recall the "DAN" ("Do Anything Now") jailbreak, which showed how users could bypass safeguards and cause the AI to produce dangerous or inappropriate responses. That precedent underscores the importance of continually improving mitigation mechanisms.

I respectfully urge the OpenAI team to keep developing and refining detection and mitigation measures that prevent model manipulation leading to misinformation and misuse, without restricting legitimate access or freedom of expression, while ensuring ethical, reasoned, and responsible AI use.

I am a user from Mexico, and this case occurred in Latin America, within a Spanish-speaking community. Thank you very much for your attention, and I remain available to collaborate or provide further information.

Reasonable people understand that they aren’t seeing the full thread. Dumb people won’t. But you can’t fix dumb. So why worry about them?