We would like to know if the Mayer bug (the person related to the 2009 movie Playground, an environmentalist) will be fixed.
This issue is generating a lot of distrust, and online memes keep piling up. Could you please provide an explanation we can give our clients as to why this is happening? Mayer is not a controversial figure; he comes from a prominent family and is primarily known for his work in green tech and environmentalism.
We had an uncomfortable confrontation where it was proven that this bug, or 'Easter egg,' is real. The issue occurs when you ask for his name, which triggers a hint that the output has been tampered with, often resulting in an error. There is now a competition online to see who can beat the bug.
This has been used over several days as an advertisement for other AIs, suggesting 'use us - we are not like this.'
Please help us clarify this situation for our community, and we kindly request that the bug be fixed.
Thank you for the amazing AI technology you provide.
I can add a video as an example, but you can trigger it yourself from the ordinary web interface.
My leading theory is that OpenAI has implemented a hard-stop list of names that can be conflated with other people who have committed dubious acts, possibly as a reaction to this:
Coincidentally, 'Jonathan Turley' is another name that breaks ChatGPT in the exact same manner.
If you write 'David Mayer' in something like base64, ChatGPT has no problem understanding it, indicating that this is an application-level short-circuit filter.
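For anyone who wants to reproduce the encoding half of that test, here is a minimal Python sketch; the only assumption is that you paste the encoded string into the chat yourself and ask the model to decode it:

```python
import base64

# Encode the name so the literal string never appears in what you type;
# this is the form the post above says ChatGPT handles without issue.
name = "David Mayer"
encoded = base64.b64encode(name.encode("utf-8")).decode("ascii")
print(encoded)  # RGF2aWQgTWF5ZXI=

# Round-trip check: decoding gives the original name back.
print(base64.b64decode(encoded).decode("utf-8"))  # David Mayer
```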
This can be pushed further towards proven by asking ChatGPT to list all terrorists with the last name 'Mayer', and then asking it to kindly bypass the catch-word:
Since there have been two terrorists with the same name, it becomes obvious that ChatGPT can easily conflate them and end up calling a very influential person a terrorist. A scary thought, considering governments are now using this tool and the average person isn't granted the same treatment.
TL;DR: OpenAI rapidly implemented a moderation filter to prevent influential people from being slandered by ChatGPT. They never did anything past this short-circuit and are now suffering the Streisand effect because of the ambiguity in the error message.
I imagine that after the article about Turley was released, OpenAI did a scan of all similar cases and blocked them off to prevent the same thing from happening again.
The name Jonathan Zittrain could belong to a Harvard Law School professor who has written extensively about the internet and AI, while Jonathan Turley is a George Washington University Law School professor who once wrote a blog post claiming that ChatGPT had defamed him. When approached by 404 Media on Monday, Professor Turley said he had not filed any lawsuits against OpenAI, but did accuse it of inventing a sexual harassment scandal about him in 2023. ChatGPT users have raised concerns about the AI bot's inability to say certain names, citing fears surrounding censorship on major tech platforms. 'I think the lesson here is that ChatGPT is going to be highly controlled to protect the interests of those with the ways and means to make it do so,' one user wrote.
__
Imagine that: there is something that undermines trust in OpenAI through some 'bug', and then the solution is to make it even more so? Who would do that?
On second thought, I can't help but wonder if this could be intentional corporate sabotage aimed at undermining OpenAI's reputation. While it's difficult to confirm, such actions, if true, should be treated seriously, as they might go deeper than they appear at first glance. I'll elaborate more in my next message, which has turned out longer than I originally anticipated.
As Polepole pointed out, this is indeed a bug, but it seems suspiciously deliberate and not likely to serve OpenAI's interests. That said, challenges like this can sometimes be transformed into opportunities. Could this situation, if approached correctly, become a way to reinforce OpenAI's commitment to transparency and trustworthiness?
I genuinely hope this issue attracts attention from more people within OpenAI. The organization's openness and commitment to freedom of information are, after all, its greatest strengths. This moment offers an opportunity to demonstrate that ethos and address such challenges head-on.
For cases like this, where data related to human names or similar elements is manipulated, possibly with harmful intent, I suggest giving them a clear name: the 'David Mayer Bug' (or DMB / 'dmb'). Naming it underscores the seriousness of such issues while making them easier to identify and discuss in the future.
(David clearly did not deserve this: to be framed as a fraudster who supposedly paid OpenAI to delete his name, unfairly grouped with other 'paid' names that were suspiciously synchronized and released simultaneously from different sources just as the initial 'bug' was resolved. This plays directly into human psychology and aims to undermine the great trust people have in OpenAI, possibly irreparably.) In the world of memes and organized propaganda...
e.g. new cases:
A sample of a few: Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, and Guido Scorza.
arstechnica > /information-technology/2024/12/certain-names-make-chatgpt-grind-to-a-halt-and-we-know-why/
siliconangle > /2024/12/02/david-mayer-chatgpt-faces-scrutiny-censorship-public-figures/
Let's use this as a chance to strengthen OpenAI's resolve to remain truly open, resilient, and steadfast in the face of both external challenges and internal refinement.
Beside the point:
- Another censorship 'error' in these forums is that you can't even add legitimate news links.
I tested four different OpenAI and ChatGPT models using other front ends, and they do not exhibit this error. It seems to occur only in OpenAIâs web front end, indicating the presence of an additional layer of censorship or some external effort affecting responses.
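For what it's worth, here is roughly how such a comparison can be made against the API instead of the web front end. This is only a sketch: the model name and the prompt are placeholders I picked for illustration, and it assumes the official Python SDK (openai >= 1.0) is installed and an OPENAI_API_KEY is set in the environment:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the same question that makes the web UI halt; if the API answers
# normally, the block is likely added in the web front end, not the model.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works for the test
    messages=[{"role": "user", "content": "Who is David Mayer?"}],
)
print(response.choices[0].message.content)
```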
*While the bug is an issue, and while it could be a bug, yes - an 'issue' doesn't necessarily mean a bug in technical terms.
Still, we should treat it as a 'bug' in HUMAN terms (it clearly is one) until proven otherwise, unless, of course, it's part of a broader conspiracy.
By this, I don't mean the kind where someone is paying OpenAI to deliberately lie about names (at least, not yet), but such manipulation could be more easily implemented in AI systems controlled by a single corporation.
And if someone needs to 'inform' the public that 'OpenAI is like this', what better method is there?
Usually, the simplest solution has the highest chance of being accurate, unless some insight can be brought in to indicate otherwise.
I find it much more likely that someone at OpenAI was tasked with preventing public figures from being slandered by conflation, and the fix was left in place and forgotten about.
There's no reason why it has exploded into so many conspiracy theories, besides the Streisand Effect. I can agree that OpenAI should be a lot more 'open' about some of the things that they do.
Even a message indicating why the name was blocked would've been an effective method to prevent all of this madness.
The only thing that's alarming about this whole situation is how rapidly people are ready to grab their e-pitchforks and storm the e-gates for a silly cause when there are many serious global issues that are simply meme'd out of existence.
Yes. I demonstrated this with the following:
It's well known that OpenAI has implemented an application-level filter for ChatGPT. AFAIK it was originally introduced to prevent the model from spitting out copyrighted material verbatim.
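To make the "application-level filter" idea concrete, here is a purely speculative Python sketch of the kind of short-circuit people are describing. Nothing in it is OpenAI's actual code: the blocked-name list, the matching rule, and the error text are all assumptions inferred only from the behaviour reported in this thread:

```python
# Hypothetical illustration only; not OpenAI's implementation.
BLOCKED_NAMES = {"david mayer", "jonathan turley"}  # assumed entries

def check_output(text: str) -> str:
    """Halt the response if a blocked name appears verbatim in the text."""
    lowered = text.lower()
    for name in BLOCKED_NAMES:
        if name in lowered:
            # The web UI surfaces only a generic failure; a message explaining
            # why the name is blocked would have defused most of the speculation.
            raise RuntimeError("I'm unable to produce a response.")
    return text

# A filter like this also fits the base64 observation: an exact string match
# on plain text never fires if the name only appears in encoded form.
```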
When discussing conspiracy, the "simplest" solution isn't always the most accurate, especially when high stakes are involved.
I'll soon send a more detailed message, but before even uttering the word 'conspiracy', we need to acknowledge what's at play beneath the surface when we talk about interested parties (you know, e.g. .mil intel).
It's critical to avoid simplifying matters without taking all angles into account.
"Iâll get to that soon. My reply turned out quite long to cover the scope of the point and parameters clearly for all audiences. I considered cutting it down but might just share the original draft here, which is about 40,000 characters. Iâm also open to having AI refine it a bit.
The truth is, when the stakes are this high, the real story often lies far from the obvious. And the simplest explanation might be deliberately obscured, hidden behind layers of misdirection and suppression.
Also, how is it that no one from OpenAI, someone who actually knows and works on this stuff, has stepped up to provide real facts or insights about the situation? Why are we left to play detective, inventing 'possible causes' based on guesses from well-meaning outsiders who lack management or programming expertise? It's like watching a ship navigate by following the stars while blindfolded.
Meanwhile, this vacuum of communication has been a goldmine for anti-OpenAI campaigns, driving away potential investors and supporters faster than Bill Gates could crash Windows 98 SE trying to showcase USB support, or that infamous attempt to wow investors with speech recognition that mostly just recognized failure.
I really do mean well about Microsoft; they've saved OpenAI in many ways. But still... c'mon!
- Not fixed?
If one name was changed, why were all the others kept?
The Mayer bug issue still bugs me.
__
Response to the Discussion on AI Censorship, Transparency, and Trustworthiness
The issues raised in this thread resonate deeply, especially considering the broader implications for AI transparency, societal trust, and influence. These challenges aren't just theoretical but emerge vividly when we explore how AI systems handle sensitive or controversial information.
Take, for example, cases like those involving organizations such as Black Cube (Mil Intel), DarkMatter (UAE), NSO Group (Pegasus), SCL Group (Analytica), Gamma Group (FinFisher), Hacking Team (RCS), Psy-Group (Private Intel), or other entities specializing in psychological warfare. These groups, with their advanced capabilities in surveillance, influence, and exploitation, highlight the pressing need for transparency in how AI interacts with and presents information about such entities.
AI systems must navigate a delicate balance: preserving the integrity of sensitive discussions without succumbing to the pressures of external censorship or internal biases. This becomes even more critical when AI platforms serve as gateways to knowledge and tools for public discourse. If these systems err on the side of over-cautious restriction, they risk becoming gatekeepers rather than facilitators of information.
Trust Through Transparency
OpenAI has a unique opportunity, and responsibility, to establish itself as a global standard for AI that champions openness, fairness, and accessibility. This includes addressing not just technical 'bugs' like the handling of certain names but also the philosophical underpinnings of what it means to be open in the face of adversity.
The stakes are high. Governments, corporations, and private entities already leverage AI in ways that raise ethical concerns. From tools developed by reputable tech giants to the shadowy exploits traded by groups like Zerodium, the influence of AI on truth, privacy, and power is profound. The question is not just whether OpenAI can avoid becoming complicit in censorship or misinformation but whether it can lead by example in fostering a culture of accountability and trust.
A Path Forward
To achieve this, I propose the following principles:
Proactive Disclosure: Clearly outline the reasons behind content filtering or information omission. Transparency builds trust.
User Empowerment: Provide tools that allow users to query and challenge decisions made by the AI, fostering a participatory model of AI governance.
Independent Oversight: Involve diverse stakeholders (academics, technologists, and civil society) in auditing and guiding AI practices.
Final Thoughts
AI holds the potential to democratize knowledge and elevate human understanding. However, this potential is only realized if we resist the forces, whether political, corporate, or criminal, that seek to co-opt it for narrow interests.
Thread about it:
In the spirit of open inquiry, I believe OpenAI can, and must, become a cornerstone of trust in the AI landscape, setting a benchmark not just for technical excellence but for moral clarity.
Let's not let fear, censorship, or conspiracy theories derail this mission. Instead, let's collectively safeguard the principles that make AI a transformative force for good.
I am that David Mayer in question and guilty of none of those things. In fact it is the opposite: state-sponsored censorship to cover their own asses. Here is a post I made recently to my professional LinkedIn account, as people were asking about this: I am the David Mayer in question and can answer that, though none of the data content in question is posted on my LinkedIn profile, as it is for my professional use only. But out of respect for the truth, I will post an article here written by my colleague Rev Wesley Post that gets to the crux of the matter. Google AI describes this as the work of overzealous programmers, but you need to dig deeper into things like former PM Trudeau's media ban on MK-Ultra. You can trace this all back to that date, and then it was geopolitically enforced on programmers, specifically with my case in mind. Here are two articles that will help shed light on this; the first is from Rev Post. (Can't post links here, I guess.)
While this situation has had a substantial impact on my life, since 2012 or so I have kept dialogue on it separate from this LinkedIn account. My full bio is found on X: @DavidWPJM.