Spamming new chats to boost ranking

I’ve seen a number of GPTs that appear out of nowhere and suddenly hit 1k-10k chats in a single day with only ~20 reviews.

Very suspicious, and I’m pretty certain there are a large number of people or companies with tens to hundreds of accounts, all spamming new chats to rank at the top of the search results. An example on a smaller scale is here:

One of the clearest examples is a batch of GPTs from a single builder: 10 of them appeared out of nowhere with 5-10k chats each. ChatGPT - Scientific Calculator was created less than a week ago and has 5k+ chats with only 50 reviews… I’ve created 130 GPTs myself which have grown to a similar size, and from my experience a growth rate like that would result in at least an order of magnitude more reviews.

Another account I’ve noticed follows a similar story: GPTs appear out of nowhere with thousands of chats and no reviews. ChatGPT - Code went from 5k to 25k chats in the last 2 weeks with a change of only 50 reviews. This simply does not happen naturally.

This issue is really ruining the search experience and action needs to be taken to prevent manipulation of the search algorithm.


Thanks for sharing.

deployed an AI agent that automatically creates new threads all day long

This is a breach of the terms of service, which forbid automating the ChatGPT UI.

But the reality is, when this developer gets their account terminated there will be 1000 others in line.

The ‘rules’ of SEO and all their consequences apply also to the GPT store.
So, yes, you have good reasons to be suspicious.


It would be easy to build a system that detects when a large number of chats comes from a small number of accounts with short conversation lengths. Even just flagging those accounts and sending them to a human for review would help.
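As a minimal sketch of that heuristic (the field names and thresholds are my own assumptions, not anything OpenAI has published), the check could look something like this:

```python
from collections import defaultdict

def flag_suspicious_gpts(chat_log, min_chats=1000, max_accounts=20, short_len=3):
    """Flag GPTs whose chat volume is concentrated in few accounts.

    chat_log: iterable of (gpt_id, account_id, message_count) tuples,
    one per conversation. A GPT is flagged when it has many chats,
    few distinct accounts, and mostly very short conversations.
    """
    stats = defaultdict(lambda: {"chats": 0, "accounts": set(), "short": 0})
    for gpt_id, account_id, message_count in chat_log:
        s = stats[gpt_id]
        s["chats"] += 1
        s["accounts"].add(account_id)
        if message_count <= short_len:
            s["short"] += 1

    flagged = []
    for gpt_id, s in stats.items():
        if (s["chats"] >= min_chats
                and len(s["accounts"]) <= max_accounts
                and s["short"] / s["chats"] > 0.9):  # >90% near-empty chats
            flagged.append(gpt_id)
    return flagged
```

A real pipeline would obviously need more signals (IP ranges, timing patterns), but even this crude filter separates a botted GPT from one with organic traffic.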

Wonder what would happen if the users actually had to pay for chatting?

The only surprise I have is if OpenAI isn’t tracking these kinds of exploits. I would not want to risk my account being terminated for this.


True, or they might just use it to gather data on the exploits before taking action :thinking:

If they’re really smart they’ll dump an update to the ranking algorithm on the same day as a big announcement.


We have many ways to deal with that, including outlier analysis using time-series estimates; we can also filter by IP and by the percentage of tokens per message. There are lots of good strategies that could be implemented to restrain this type of exploit. It isn’t just cheating, it’s a serious security problem that OpenAI has to deal with!
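For illustration, a leave-one-out z-score over a GPT’s daily new-chat counts is one simple form of the time-series outlier analysis mentioned above (the threshold of 3 is an assumption; a production system would use something more robust, like seasonal decomposition):

```python
import statistics

def daily_outliers(daily_chats, z_threshold=3.0):
    """Return indices of days whose new-chat count is a z-score outlier.

    daily_chats: list of daily new-chat counts for a single GPT.
    Each day is compared against the mean/stdev of all *other* days,
    so one huge spike can't mask itself by inflating the statistics.
    """
    outliers = []
    for i, value in enumerate(daily_chats):
        rest = daily_chats[:i] + daily_chats[i + 1:]
        mean = statistics.mean(rest)
        stdev = statistics.stdev(rest)
        if stdev > 0 and abs(value - mean) / stdev > z_threshold:
            outliers.append(i)
    return outliers
```

A day where a bot suddenly dumps thousands of chats stands out immediately, while normal day-to-day variation stays under the threshold.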


Hey pulsr, I’ve seen your GPTs in the store and they are excellent.

I couldn’t AGREE more with your point on spamming.

In particular, the person behind these GPTs is the epitome of a low-life spammer.

He absolutely cluttered the search results for all popular keywords with the lowest of low effort GPTs I’ve seen thus far.

He finds a popular keyword, creates the most brainless GPT with that keyword as a name, and runs an auto-clicker bot to generate thousands and thousands of automated chats per day.

I cannot believe OpenAI are allowing this to happen. He and others like him must be banned.


For example, we only have 1 GPT that we created on the very day the GPT Store launched.

And since then, we have spent thousands of dollars on paid advertising (Facebook ads, etc.) to promote it. So we bring ACTUAL, REAL users to our GPT and to OpenAI as a whole.

Whereas spammers like him are completely and utterly abusing the system with auto-clicker bots to generate inordinate amounts of Chats per day.

The whole ‘Chats’ metric never made sense to me from the beginning.

Why not measure ‘Users’ instead?!

I initially thought 1 User = 1 Chat.

Meaning, no matter how many chats the same user (ChatGPT Plus account) creates thereafter, it won’t increase the total chat count any further than 1. Because, OBVIOUSLY, otherwise the system can be abused by script kiddies.

This is SO obvious that I never considered the opposite a possibility, until a colleague of mine brought it to my attention.

I was dumbfounded that OpenAI are allowing this to happen. This one guy and other spammers like him are completely ruining the entire GPT store.

Because 1 User = 1000s of Chats per day, with a simple auto-clicker script and a few ChatGPT Plus accounts.



Step 1:

Ban people like this and their obvious SPAM GPTs immediately. These accounts are unequivocally the biggest spammers in the GPT store as of now.

Their GPTs are such blatant and obvious spam, I cannot believe OpenAI haven’t caught up to that yet.

Just enter their website URL in the GPT store search function:

And see for yourself.

But keep in mind, the search function only shows 10 GPTs, even if you enter the GPT Builder’s name.

Whereas these spammers have more than 10 GPTs with 1000s of Chats.

So it’s even worse than what you see.

Per his website, he has 24-25 SPAM GPTs as of the time of writing this.

Step 2:

Change the ‘Chats’ metric to ‘Users’. Meaning 1 paying ChatGPT Plus account can only be measured as 1 User, nothing more.

This will make this script-kiddy spam economically NOT viable, because you’d have to pay $20 for a ChatGPT Plus subscription just to generate 1 User.

So the exploit shown in the OP Video cannot happen.
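To show the difference the metric makes, here is a toy comparison of raw ‘Chats’ versus deduplicated ‘Users’ (the data layout is hypothetical; OpenAI hasn’t documented how the store metric is computed):

```python
from collections import defaultdict

def chat_vs_user_counts(chats):
    """Compare raw chat counts with unique-user counts per GPT.

    chats: list of (gpt_id, account_id) pairs, one per conversation started.
    Returns {gpt_id: (total_chats, unique_users)}.
    """
    totals = defaultdict(int)
    users = defaultdict(set)
    for gpt_id, account_id in chats:
        totals[gpt_id] += 1          # every new conversation counts
        users[gpt_id].add(account_id)  # but each account counts only once
    return {g: (totals[g], len(users[g])) for g in totals}
```

Under the ‘Chats’ metric, 3 bot accounts spamming 1,000 conversations each look like 3,000 units of popularity; under the ‘Users’ metric they are worth exactly 3.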

Step 3:

Be careful NOT to ban teams of honest people.

Since my colleague brought this abuse issue to my attention, I now know the total number of ‘Chats’ can be influenced by the GPT builder himself (his own ChatGPT Plus account) and other team/connected accounts (Still finding it difficult to believe OpenAI have allowed this, but it’s true).

For example, we have only 1 GPT, and we pour everything into this 1 GPT.

Besides paying for ads, my team and I use our GPT every day, both for our own use cases and to ensure it’s working as intended. We test and improve our uploaded scripts, commands, knowledge files, etc., because they often glitch: our GPT starts to summarize content when it shouldn’t, or hallucinate when it shouldn’t. So we are tinkering every day to make sure it works as intended.

Therefore, in light of this new evidence, I’m now certain a decent percentage of our total ‘Chats’ are coming from us. Because we are using our own GPT every single day.

But this is happening naturally through our use-case and debugging processes. So we maybe generate 10-50 Chats a day ourselves. And I’m sure every other GPT builder who cares about their GPT is testing/improving it every day. So they are also generating 10-50 Chats per day themselves, just by tinkering.

However, this is not even REMOTELY comparable to the volume of Chats these spammers generate. This one guy generates 1,000-5,000 Chats a day across his fleet of spam GPTs. DM me if you’d like to talk more about this topic. I have a few other insights. We can work together to escalate this to OpenAI.


First of all thank you for your kind words, I very much appreciate it.

I’m glad you’re as passionate about this issue as I am!

Thankfully I have a call booked with OpenAI tomorrow, and escalating this issue is my top priority.

Let’s get rid of this loser


This problem still doesn’t seem to be solved. Reporting these accounts is a temporary fix we can all take, which will help to further highlight the issue. Use the keyword ‘spam’ when reporting their GPTs.

I completely agree. See my post about it:

The reality is that OpenAI doesn’t seem to care about the “good guys”. They just want to generate traffic and get as much data as possible.

They’ve changed the algorithm so a GPT’s name no longer has to match the search keyword exactly. If I renamed my ‘math’ GPT to ‘math mentor’ it would still get the same amount of visibility. For me it’s a matter of sticking to my brand at this point. The problem in this case is that the traffic is fake.


Taken matters into my own hands. This is sure to get the attention we need.

The cool part about AIs is that over time they will see a pattern, so it would be very easy for OpenAI to implement a system that runs through each account, finds bad actors, and bans them. That’s what I would be doing: using AI systems to look for misuse or stat padding :slight_smile:

On the other hand, it could simply be that the title is catchier, or that they marketed it better. So you never know for certain unless you look at the usage data.

Eh, let the bad actors make themselves known. If given free rein and this is how they behave…

Have they given you any insights on how they plan to fix the issue? I assume they have reached out to you given that you have 3 GPTs in the top 12. Although I wouldn’t be surprised if they didn’t care even about you.

Not a single word from them!
The problem seems to be mainly fixed now that they’ve changed the search algorithm. I’m always on the lookout for more spammers but it’s certainly calmed down