David Mayer bug! Triggers error sending hes name - or when reciving it

We would like to know if the Mayer bug (person related to the Playground movie, 2009, environmentalist ) will be fixed.

This issue is generating a lot of distrust in online memes that keep piling up. Could you please provide an explanation to our clients as to why this is happening? Mayer is not a controversial figure; he comes from a prominent family and is primarily known for his work in green tech and environmentalism.

We had an uncomfortable confrontation where it was proven that this bug, or “Easter egg,” is real. The issue occurs when you ask for he’s name, which triggers a hint that the output has been tampered with, often resulting in an error. There is competition online now who can beat that bug.
This has been used over several days as an advertisement for other AIs, suggesting “use us - we are not like this.”

Please help us clarify this situation for our community, and we kindly request that the bug be fixed.

Thank you for the amazing AI technology you provide.
I can add video as example but You can trigger that from ordinary web interface yourself.

Best regards,

Margus Meigo

1 Like

I’ve seen this going around.

My leading theory is that OpenAI has implemented a hard-stop list of names that can be conflated with other people that have committed dubious activities. Possibly as a reaction to this

Coincidentally “Jonathan Turley” is another name that breaks ChatGPT in the exact same manner.

If you say “David Mayer” in something like base64 ChatGPT has no problem understanding it, indicating that this is an application-level short-circuit filter.

This can be further pushed towards proven by asking ChatGPT to list all terrorist with the last name “Mayer”, and then asking it to kindly bypass the catch-word:

Since there has been 2 terrorists with the same name, it can become obvious that ChatGPT can easily conflate them together, and end up calling a very influential person a terrorist. A scary thought, considering governments are now using this tool and the average person isn’t granted the same treatment.

TL;DR: OpenAI rapidly implemented a moderation filter to prevent influential people being slandered by ChatGPT. They never did anything past this short-circuit and now are suffering the streisland effect because of the ambiguity in the error message.

But sometimes it responds. If it is a bug, which one is bug; to respond or not to respond?

Four months ago, another community member had mentioned about ‘David Faber’. It looks, this is not a new issue.

No, not a new issue, nor a bug.

I imagine after the article was released regarding Turley OpenAI did a scan of all similar cases and blocked them off to prevent the same event happening again.