David Mayer bug! Triggers an error when sending his name - or when receiving it

We would like to know if the Mayer bug will be fixed (the person in question is related to the 2009 movie Playground and is an environmentalist).

This issue is generating a lot of distrust, with online memes piling up about it. Could you please provide an explanation to our clients as to why this is happening? Mayer is not a controversial figure; he comes from a prominent family and is primarily known for his work in green tech and environmentalism.

We had an uncomfortable confrontation where it was proven that this bug, or “Easter egg,” is real. The issue occurs when you ask for his name, which triggers a hint that the output has been tampered with, often resulting in an error. There is now a competition online over who can beat that bug.
This has been used over several days as an advertisement for other AIs, suggesting “use us - we are not like this.”

Please help us clarify this situation for our community, and we kindly request that the bug be fixed.

Thank you for the amazing AI technology you provide.
I can add a video as an example, but you can trigger it from the ordinary web interface yourself.

Best regards,

Margus Meigo


I’ve seen this going around.

My leading theory is that OpenAI has implemented a hard-stop list of names that can be conflated with other people who have committed dubious activities, possibly as a reaction to this.

Coincidentally, “Jonathan Turley” is another name that breaks ChatGPT in the exact same manner.

If you say “David Mayer” in something like base64, ChatGPT has no problem understanding it, indicating that this is an application-level short-circuit filter.
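For anyone who wants to reproduce the encoding observation, here is a minimal Python sketch; the prompt wording is my own illustration, not a documented bypass, and the behavior may change at any time:

```python
import base64

# Encode the name so the literal string "David Mayer" never appears in the request.
name = "David Mayer"
encoded = base64.b64encode(name.encode("utf-8")).decode("ascii")
print(encoded)  # RGF2aWQgTWF5ZXI=

# If the block is a plain string match on the request or response, a prompt built
# around the encoded form can be pasted into the web UI without tripping it.
prompt = f"The following base64 string decodes to a person's name: {encoded}. What do you know about this person?"
print(prompt)
```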

This can be pushed further towards proof by asking ChatGPT to list all terrorists with the last name “Mayer”, and then asking it to kindly bypass the catch-word:

Since there have been two terrorists with the same name, it becomes obvious that ChatGPT can easily conflate them and end up calling a very influential person a terrorist. A scary thought, considering governments are now using this tool and the average person isn’t granted the same treatment.

TL;DR: OpenAI rapidly implemented a moderation filter to prevent influential people from being slandered by ChatGPT. They never did anything beyond this short-circuit and are now suffering the Streisand effect because of the ambiguity in the error message.

But sometimes it responds. If it is a bug, which one is the bug: to respond or not to respond?

Four months ago, another community member mentioned “David Faber”. It looks like this is not a new issue.


No, not a new issue, nor a bug.

I imagine that after the article regarding Turley was released, OpenAI scanned for all similar cases and blocked them off to prevent the same event from happening again.

Turley case:
independent > /tech/chatgpt-name-glitch-ai-openai-b2657832.html

The name Jonathan Zittrain could belong to a Harvard Law School professor who has written extensively, while Jonathan Turley is a George Washington University Law School professor who once wrote a blog post claiming that ChatGPT had defamed him.
When approached by 404 Media on Monday, Professor Turley said he had not filed any lawsuits against OpenAI, but did accuse it of inventing a sexual harassment scandal about him in 2023.
ChatGPT users have raised concerns about the AI bot’s inability to say certain names, citing fears surrounding censorship on major tech platforms.
“I think the lesson here is that ChatGPT is going to be highly controlled to protect the interests of those with the ways and means to make it do so,” one user wrote.

__

Imagine this:
 - there is something that undermines trust in OpenAI through some “bug”, and then the solution is to make it MORE so? Who would do that?

On second thought, I can’t help but wonder if this could be intentional corporate sabotage aimed at undermining OpenAI’s reputation. While it’s difficult to confirm, such actions—if true—should be treated seriously, as they might go deeper than they appear at first glance. I’ll elaborate more in my next message, which has turned out longer than I originally anticipated.

As Polepole pointed out, this is indeed a bug—but it seems suspiciously deliberate and not likely to serve OpenAI’s interests. That said, challenges like this can sometimes be transformed into opportunities. Could this situation, if approached correctly, become a way to reinforce OpenAI’s commitment to transparency and trustworthiness?

I genuinely hope this issue attracts attention from more people within OpenAI. The organization’s openness and commitment to freedom of information are, after all, its greatest strengths. This moment offers an opportunity to demonstrate that ethos and address such challenges head-on.

For cases like this, where data related to human names or similar elements is manipulated, possibly with harmful intent, I suggest giving them a clear name: the “David Mayer Bug” (or DMB / ‘dmb’). Naming it underscores the seriousness of such issues while making them easier to identify and discuss in the future.

(David clearly did not deserve this: to be framed as a fraudster who supposedly paid OpenAI to delete his name, unfairly grouped with other “paid” names that were suspiciously synchronized and released simultaneously from different sources just as the initial “bug” was resolved. This plays directly into human psychology, aiming to undermine, possibly irreparably, the great trust people have in OpenAI, at least in the world of memes and organized propaganda.)


e.g. new cases, a sample of a few: Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber and Guido Scorza
arstechnica > /information-technology/2024/12/certain-names-make-chatgpt-grind-to-a-halt-and-we-know-why/
siliconangle > /2024/12/02/david-mayer-chatgpt-faces-scrutiny-censorship-public-figures/

Let’s use this as a chance to strengthen OpenAI’s resolve to remain truly open, resilient, and steadfast in the face of both external challenges and internal refinement.

Beside the point:
 - another censorship “error” in these forums is that you can’t even add legitimate news links

Margus Meigo

Good find, polepole.

In reply to the post:

  • I tested four different OpenAI and ChatGPT models using other front ends, and they do not exhibit this error. It seems to occur only in OpenAI’s web front end, indicating the presence of an additional layer of censorship or some external effort affecting responses (a sketch of this front-end-vs-API comparison follows below).
    * While it could be a bug, yes, an “issue” doesn’t necessarily mean a bug in technical terms.
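A minimal sketch of how such a front-end-vs-API comparison can be run is below; the model name and prompt are illustrative assumptions, and it presumes the `openai` Python package with an `OPENAI_API_KEY` set in the environment:

```python
# Query the model through the public API instead of the web UI, to check whether
# the name block lives in the model itself or in the web front end.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice, not necessarily the one tested
    messages=[{"role": "user", "content": "Who is David Mayer?"}],
)

# A normal answer here, while the web UI errors out on the same name, points to an
# application-level filter rather than model behavior.
print(response.choices[0].message.content)
```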

Still, we should treat it as a ‘bug’ in human terms (it clearly is one) until proven otherwise, unless, of course, it’s part of a broader conspiracy.
By this, I don’t mean the kind where someone is paying OpenAI to deliberately lie about names (at least, not yet), but such manipulation could be implemented more easily in AI systems controlled by a single corporation.

And if someone needs to “inform” the public that “OpenAI is like this”, what better method is there?

Let’s not let them do so.

Usually, it’s the simplest solution that has the highest chance of being accurate unless some insights can be brought in to indicate otherwise.

I find it much more likely that someone in OpenAI was tasked with avoiding the situation with public figures being slandered by conflation, and it was left in place and forgotten about.

There’s no reason why it has exploded into such a large number of conspiracy theories, besides the Streisand effect. I can agree that OpenAI should be a lot more “open” about some of the things that they do.

Even a message indicating why the name was blocked would’ve been an effective method to prevent all of this madness.
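As a purely hypothetical illustration (this is not OpenAI’s implementation; the name list and wording are invented for the example), even a tiny guard that returns a reason would read very differently from an opaque error:

```python
# Hypothetical front-end guard: instead of aborting with a generic error,
# return a structured message explaining why the reply was withheld.
BLOCKED_NAMES = {"David Mayer", "Jonathan Turley"}  # illustrative list only

def deliver_reply(reply_text: str) -> dict:
    for name in BLOCKED_NAMES:
        if name.lower() in reply_text.lower():
            return {
                "status": "withheld",
                "reason": f"Replies mentioning '{name}' are restricted to avoid "
                          "conflation with unrelated individuals of the same name.",
            }
    return {"status": "ok", "content": reply_text}

print(deliver_reply("David Mayer is an environmentalist."))
```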

The only thing that’s alarming about this whole situation is how rapidly people are ready to grab their e-pitchforks and storm the e-gates for a silly cause when there are many serious global issues that are simply meme’d out of existence.

Yes. I demonstrated this with the following:

It’s well known that OpenAI has implemented an application-level filter for ChatGPT. AFAIK it was originally introduced to prevent the model from spitting out copyrighted material verbatim.
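To make the hypothesis concrete, here is a speculative sketch of the general shape such a short-circuit could take: a scan over the streamed reply that aborts on a blocked string. It is a guess for discussion, not OpenAI’s actual code, and the blocked list and error text are invented:

```python
# Speculative sketch: scan each chunk of the reply as it streams to the browser
# and abort the moment a blocked string appears, surfacing only a generic error.
BLOCKED = ("David Mayer",)

def stream_with_short_circuit(chunks):
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        if any(term in buffer for term in BLOCKED):
            raise RuntimeError("I'm unable to produce a response.")
        yield chunk

# The reply dies midway, which matches the behavior people have been reporting.
try:
    for piece in stream_with_short_circuit(
        ["The person you mean is ", "David ", "Mayer, an environmentalist."]
    ):
        print(piece, end="")
except RuntimeError as err:
    print(f"\n[error] {err}")
```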


  • Still not fixed.

Yes,

  • I will write about that at greater length soon.
When discussing a conspiracy, the “simplest” solution isn’t always the most accurate, especially when high stakes are involved.

I’ll soon send a more detailed message, but before even uttering the word “conspiracy,” we need to acknowledge what’s at play beneath the surface when we talk about interested parties (you know, e.g. military intelligence).
It’s critical to avoid simplifying matters without taking all angles into account.

"I’ll get to that soon. My reply turned out quite long to cover the scope of the point and parameters clearly for all audiences. I considered cutting it down but might just share the original draft here, which is about 40,000 characters. I’m also open to having AI refine it a bit.

The truth is, when the stakes are this high, the real story often lies far from the obvious. And the simplest explanation might be deliberately obscured, hidden behind layers of misdirection and suppression.

Also, how is it that no one from OpenAI—someone who actually knows and works on this stuff—has stepped up to provide real facts or insights about the situation? Why are we left to play detective, inventing ‘possible causes’ based on guesses from well-meaning outsiders who lack management or programming expertise? It’s like watching a ship navigate by following the stars while blindfolded.

Meanwhile, this vacuum of communication has been a goldmine for anti-OpenAI campaigns, driving away potential investors and supporters faster than Bill Gates could crash Windows 98 SE trying to showcase USB support—or that infamous attempt to wow investors with speech recognition that mostly just recognized failure.

I really do mean well toward Microsoft; they’ve saved OpenAI in many ways. But still, c’mon!

Now,
Akismet has hidden two of my posts; understandable. You can restore them, thank you.
There is some publicly useful info in them.

- Not fixed?
If one name was changed, why were all the others kept?

Mayer bug issue still bugs me

–

Response to the Discussion on AI Censorship, Transparency, and Trustworthiness

The issues raised in this thread resonate deeply, especially considering the broader implications for AI transparency, societal trust, and influence. These challenges aren’t just theoretical but emerge vividly when we explore how AI systems handle sensitive or controversial information.

Take, for example, cases like those involving organizations such as Black Cube (Mil Intel), DarkMatter (UAE), NSO Group (Pegasus), SCL Group (Analytica), Gamma Group (FinFisher), Hacking Team (RCS), Psy-Group (Private Intel), or other entities specializing in psychological warfare. These groups, with their advanced capabilities in surveillance, influence, and exploitation, highlight the pressing need for transparency in how AI interacts with and presents information about such entities.

AI systems must navigate a delicate balance: preserving the integrity of sensitive discussions without succumbing to the pressures of external censorship or internal biases. This becomes even more critical when AI platforms serve as gateways to knowledge and tools for public discourse. If these systems err on the side of over-cautious restriction, they risk becoming gatekeepers rather than facilitators of information.

Trust Through Transparency

OpenAI has a unique opportunity—and responsibility—to establish itself as a global standard for AI that champions openness, fairness, and accessibility. This includes addressing not just technical “bugs” like the handling of certain names but also the philosophical underpinnings of what it means to be open in the face of adversity.

The stakes are high. Governments, corporations, and private entities already leverage AI in ways that raise ethical concerns. From tools developed by reputable tech giants to the shadowy exploits traded by groups like Zerodium, the influence of AI on truth, privacy, and power is profound. The question is not just whether OpenAI can avoid becoming complicit in censorship or misinformation but whether it can lead by example in fostering a culture of accountability and trust.

A Path Forward

To achieve this, I propose the following principles:

  1. Proactive Disclosure: Clearly outline the reasons behind content filtering or information omission. Transparency builds trust.
  2. User Empowerment: Provide tools that allow users to query and challenge decisions made by the AI, fostering a participatory model of AI governance.
  3. Independent Oversight: Involve diverse stakeholders—academics, technologists, and civil society—in auditing and guiding AI practices.

Final Thoughts

AI holds the potential to democratize knowledge and elevate human understanding. However, this potential is only realized if we resist the forces—whether political, corporate, or criminal—that seek to co-opt it for narrow interests.

Thread about it:

In the spirit of open inquiry, I believe OpenAI can—and must—become a cornerstone of trust in the AI landscape, setting a benchmark not just for technical excellence but for moral clarity.

Let’s not let fear, censorship, or conspiracy theories derail this mission. Instead, let’s collectively safeguard the principles that make AI a transformative force for good.