Important topic has been hidden (Bots and Fake Accounts on the community...)


Could any admin double check this or get into contact on why this was hidden?

It would be great to know that OpenAI is working on this as well

Here is the link to the original post: Bots and fake accounts on the community... can anyone do anything about it?

I’m pretty sure it doesn’t break any community rules, and it should be taken seriously

1 Like

What is the progress on this?

If your concern was “irrelevant insistent posters distracting from the core mission of the forum”, perhaps it is best to examine if that describes you, yourself…

You can flag your own post, “other”, and engage with the decision-making mods.

Bot spam is already not allowed, nor is AI impersonating genuine human interaction. You are not raising anything that is not already monitored.

4 Likes

This is the wildest place I have ever been. Who are all of you?

It’s unfortunate it was hidden.
GPT has fewer hallucinations while posts on this forum have more. Poetic, really.

It’s a huge issue - bots. Regardless of “how it applies directly on this forum”, it’s a serious issue that I hope other forums are taking seriously and witnessing. I sure have been.

I think anything that is indexable is pretty much done for.
As sad as it is to say, places like Discord are most likely better.

1 Like

Hi everyone—just to clarify, the topic was hidden because the original poster flagged actual human users as bots, which isn’t accurate.

I’m part of the moderation team, so I want to be transparent about the reason for that action. That said, what follows is just my personal perspective and not an official OpenAI stance:

We’re a global community with all kinds of users, and people often use ChatGPT to help turn their ideas into longer-form content. That doesn’t make them bots—it just means they’re using tools available to them.

In this case, I think it’s unfair (and unhelpful) to accuse real members of being bots just because their posts look a certain way.

I’d genuinely love to hear what others think:

  • Are bots a real problem in this community?
  • Are you seeing spammy bot replies regularly?
  • Is this something that’s keeping you from engaging more?

Open to discussion!

5 Likes

Good bots are very hard to write… I remember tools like ‘Submit Wolf’ historically that automated submissions to search engines, ‘links pages’ and ‘guestbooks’ (remember those?)…

The reality is that there has to be value behind them for people to maintain and develop them… If the bot is actually providing value the community will support it, if it isn’t the community will ignore it…

I am all for ‘smarter than human’ bots that are helpful… why would that be a problem?

In fact, smarter-than-human bots would be marked as such, because that makes the developer of the bot look pretty smart… That’s where we move forward, toward “Do other developers dream of an 'AI Curated' forum?”

The most important thing, probably, is to be part of the community now, before that happens… Certainly bias always exists… Just as a forum member builds trust over time… so, in theory, should (let’s refine the distinction a little) an agentic agent.

If in doubt… Check the forum or user stats and see whether the account is really contributing well!
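As a minimal sketch of that “check the stats” idea: assuming a Discourse-style forum like this one, public profile stats (post count, likes received, days visited, topics entered) can be turned into a rough contribution score. The field names below mirror Discourse’s public user summary data, but the weighting and thresholds are purely illustrative assumptions, not real moderation rules.

```python
# Hypothetical heuristic: score an account's contribution from public
# Discourse-style activity stats. Field names follow Discourse's user
# summary payload; thresholds are illustrative assumptions only.

def contribution_score(stats: dict) -> float:
    """Return a rough 0..1 score from public activity stats."""
    posts = stats.get("post_count", 0)
    likes_received = stats.get("likes_received", 0)
    days_visited = stats.get("days_visited", 0)
    topics_entered = stats.get("topics_entered", 0)

    if posts == 0:
        return 0.0

    # Accounts that post a lot but read little and earn few likes
    # look more bot-like than engaged members.
    read_ratio = min(topics_entered / (posts * 5), 1.0)
    like_ratio = min(likes_received / posts, 1.0)
    tenure = min(days_visited / 30, 1.0)
    return round((read_ratio + like_ratio + tenure) / 3, 2)

def looks_suspicious(stats: dict, threshold: float = 0.2) -> bool:
    """Flag accounts whose contribution score falls below a threshold."""
    return contribution_score(stats) < threshold
```

For example, an account with 50 posts but almost no reading, likes, or tenure scores near zero, while a long-time member with healthy like and read ratios scores near one. This is only a sanity-check heuristic; a determined bot can game any single metric.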

2 Likes

You defending these bots is at the same level as saying these youtube comments are totally not bots:

it would be nice to ask the OpenAI team if there are measures against bots in the community, as there don’t seem to be any. Confirm an email and you are in, which can be automated.

Are bots a real problem in this community?

yes.

Are you seeing spammy bot replies regularly?

yes, people using local LLMs to fill this place with clutter and nonsense to stunt the growth of the community

Is this something that’s keeping you from engaging more?

yes, it makes me want to quit the community

is there any way this could be passed up to OpenAI for them to take a look? I’m 100% sure this is beyond the technical skills of the moderation team, as it requires specialized cybersecurity skills and not just vibes

very true, it just sucks that the community used to be better before the first big wave, when there weren’t bots everywhere pretending to be humans

Hi!

Thanks for raising these points.

Here at community.openai.com, we have quite a bit of experience with bots and related patterns. We also have escalation tools in place to manage these situations effectively when they arise.

A couple of things I want to highlight:

In another thread, you mentioned creating a POC to simulate a potential bot attack on the forum. I strongly advise against this. Not only is it counterproductive, but depending on how it’s executed, it could escalate to OpenAI’s legal team—so please keep that in mind.

Now, regarding your broader concern:

This forum isn’t intended to serve as a moderation layer for the official OpenAI YouTube channel. If your feedback relates to activity on YouTube, the best course of action would be to share your concerns directly there, where the relevant team is most likely to see it.

As for bot activity here on the forum: I took a close look at the examples you posted. One appears to be off the mark, but the other was actually a good catch—thank you for bringing it up.

I’ve reviewed the presence of YouTube-style spam here and, at this point, that doesn’t appear to be a widespread issue within this community. That said, if you come across other examples of suspicious bot activity specifically within the forum, feel free to share them—I’m more than happy to take a closer look.

Thank you very much for sharing your concerns!

2 Likes

oh yeah, sure, keeping the forum cybersec ops updated is surely distracting from the core mission of the forum

great, if it’s not allowed, how come there are no measures in place?

ok, good to know. can you at least confirm openai is aware that there is a lack of basic bot detection systems in place in the community?

Obviously, I’m only referring to the community.openai.com and not YouTube, those are two completely separate things.

@vb would a POC have stopped Deepseek from stealing OpenAI data from the API/ChatGPT?

Would that also have been considered counterproductive and escalated to the legal team?

Another suggestion: would creating an environment of the community which can be ethically attacked over at Bug Bounty: OpenAI - Bugcrowd be possible?

that way, not only would OpenAI be open to POCs, but those that reveal vulnerabilities (such as creating bots running LLMs) won’t be threatened with legal action for wanting security improvements.

It would be most constructive if you could share more examples of suspected bots here in the community.
For example, how you arrived at the conclusion that basic bot prevention measures are not in place.
While it’s easy to sign up for an account, it’s not necessarily easy to make spam posts without them being deleted.

If you come across such posts you can always flag them and we will be happy to remove them.

Still, why would anyone bother using bots on a forum where practically everyone considers themselves a “bot expert”?

Using AI to improve writing or ask better questions isn’t the same as bot spamming. I’ve been an active user for a while, and honestly, I haven’t seen anything that stands out as suspicious or worth worrying about.

2 Likes

@vb this does not answer my concern at all, it also does not address any of my questions. Could you please address the questions I made?

Should I get in touch with OpenAI’s legal team about you asking me to break the Computer Fraud and Abuse Act (CFAA) by publicly disclosing this?

I’m not risking breaking DMCA Section 1201 under U.S. criminal law because some moderator on community.openai.com is asking me to make a public disclosure instead of letting me report the issues privately and suggest changes to the Bugcrowd bug bounty scope and environments.

Not true, at all. Obvious, amateur bots asking you to click a shady link? Sure, those get caught. But a slightly more advanced bot with other bot accounts praising it (similar to the YouTube example I shared)? Not at all, and they are everywhere in the community.

it is so obvious it hurts.

why is there a push to “solve” this by flagging them and not forward this to the OpenAI security team?


No offense, but asking the community to flag the issue separately instead of addressing this issue with OpenAI is the same as in the movie “Idiocracy”, where, when asked why they water the plants with it, the answer was always “because it has electrolytes”. Do you really not understand the issue here?

I am sending you a private message so you can disclose this issue in a non-public communications channel with the rest of the moderation team.

Here is the contact information for urgent issues from the About page of this community:

In the event of a critical issue or urgent matter affecting this site, please contact help.openai.com.

If you come across any inappropriate content, don’t hesitate to start a conversation with our moderators and admins. Remember to log in before reaching out.

1 Like

You can flag me. I use AI occasionally to improve how I write or structure responses. That’s not botting; that’s using tools efficiently.

Some users do the same, using AI to phrase things better or ask smarter questions, but there’s no evidence of actual bot networks “praising” posts or gaming the system. This isn’t YouTube; there’s no monetization, no algorithm to exploit with views or engagement.

I’ve been active here for quite a while, and over 98% of what I see are real users engaging with real issues. If you have hard evidence to the contrary, share it. Otherwise, attacking moderators and throwing around accusations without proof just weakens your credibility.

I use logic to reach my conclusions; it is very unlikely that I would be mistaken.

2 Likes

I assumed these threads were baiting the mods :confused:

You would think such an issue would be best resolved in a DM?

‘Chinese interference’ maybe ^^

1 Like

We are in a direct conversation with this user. I am closing this topic because the feedback has been received.