I’m confused. It’s their “store” for GPTs built and accessed on their own platform. The top category is,
Featured
Curated top picks from this week
They regularly remove GPTs for a wide number of reasons.
They also ban users for a wide number of reasons.
Preventing habitually bad actors from infiltrating the GPT Store and overwhelming it is somehow one manipulation too far for you?
It’s their storefront for their product on their platform using their model… of course it’s “subject to manipulation.”
Please don’t tell me you think Google or Amazon search results are a purely organic meritocracy; I don’t have it in me to deliver more bad news today.
Edit: I wanted to add something to this with respect to the act of shadow banning itself.
I acknowledge and understand it is a very controversial subject, and I have very mixed feelings about it.
For instance, I would never support the act of shadow banning on a community forum such as this one, Reddit, etc., as I see that as abusive to individuals. It’s easy enough to just ban someone over and over if they continue the same unwelcome behaviors.
I feel a little differently about platforms where either there is little to no person-to-person communication or there may be financial incentives to circumvent a ban.
The GPT Store meets both of those criteria.
But, I fully understand there are many people who feel shadow banning is inherently unethical. I see their points and I fully agree with all of them.
I just disagree with the idea that this particular unethical behavior is never warranted or justified.
My assumption is these shadow bans are most likely targeting those accounts which early on scraped data from tens of thousands of GPTs and republished them. The kind of bad actors who may have the resources to set up countless new accounts, shift IPs, etc., and just generally go about the business of evading bans in a way a more modest bad actor either couldn’t manage or would simply choose not to. Whether employing shadow bans to quarantine these bad actors is good policy is something reasonable people can disagree about.
Personally, I think it is warranted and justified in limited situations and when used judiciously.
That said, the obvious problem is that the very nature of shadow banning makes it nearly impossible for outsiders to assess how often and under which conditions it is being employed.
So, @SomeUser2022 you are 100% right and justified to call into question OpenAI’s “character” and trustworthiness as a company knowing they do engage in shadow banning at some scope in some contexts under some circumstances.
I will make a point to bring it up personally with whichever OpenAI staff takes over here because—and correct me if I’m wrong here—one of the things about this which concerns you the most is the lack of transparency around the issue, and some communication straight from the horse’s mouth might assuage some of those concerns somewhat. Regardless, I think this is a topic for which the community deserves some answers.
But, again, there are seven IDs (out of millions) listed, so it’s easy enough to check if yours is among them, though that doesn’t rule out the possibility of other such mechanisms that aren’t so openly and blatantly labeled.