The article discusses the challenges OpenAI is facing with its GPT Store, which is becoming cluttered with low-quality, spam-like, and possibly copyright-infringing GPTs. Despite OpenAI's review system, which mixes human and automated moderation, the store is inundated with GPTs that may promote academic dishonesty, impersonate individuals or organizations, or attempt to bypass OpenAI's content guidelines. This situation raises concerns about the integrity and utility of the GPT Store and highlights the need for more stringent moderation and quality control.
My comments:
There are surprising developments in the GPT Store: authors are cloning hundreds of other people’s GPTs, and some individuals have over a thousand GPTs. OpenAI has removed misleading GPTs after they gained popularity. You can observe these trends and more in the open-source dataset I created, BeeTrove, available on GitHub.
It's interactive, with details down to specific GPTs. All data is available for download for free (Apache 2.0), and the Tableau dashboard can also be downloaded from Tableau Public.
It just doesn’t… make sense. There are so many app stores demonstrating the essential architecture for success.
This all just reminds me of the plugin store again. A great concept implemented so poorly it almost felt like sabotage. I mean really, only sorted alphabetically? I will never get over that.
Personally, I haven't touched any GPTs since the limit was halved (not sure if it still is). Not even my own. I haven't found any value in them, nor do I care for them.
OpenAI seems to be deliberately hands-off with all of this. It seemed inevitable that people would start to spam-fill the store and also “boost” their GPTs with spam chats.
The given metrics are a complete joke (OpenAI's metrics, not this dataset's). I mean really, ranked by conversations initiated? How does that even make sense? Screw the people who build GPTs for long-lasting conversations? All the top GPTs will be one-offs. The only justification I can find is: free training data for OpenAI.
I agree with many of your points. I think the concept is good and there is value for the end user in having access to specialized LLMs, but the current state of the GPT Store is mind-blowing for so many bad reasons.
I just figured it would be an instant spam nightmare; I'm surprised anyone would have thought otherwise. Not to mention the people using questionable GPTs and pumping their own (or their companies') data into them. It is a goldmine for compromising security and harvesting data. It's a decent idea with a horrible implementation. They should have started with trusted partners, seen how that went, and then considered opening it up. They just went full chaos from the jump.
@N2U 6 posts … where!? I posted once 2 weeks ago; I think you saw some metric that counts "answering comments" as "posting". I posted twice in a month, the first launching a unique and free open-source dataset and the second sharing an insightful article that is being shared around the AI community, adding my two cents to it. So I don't agree with you saying that I'm spamming here. Am I being off topic or saying something not useful to this community? If this isn't the place to talk about the GPT Store, where would it be?
I just went to your profile and checked. There are six posts referencing your GitHub repo. I agree with @N2U in this case.
As a constructive suggestion: you can start a project topic where you keep the community updated about your project.
Otherwise, cross-posting the same link across the forum is not in line with the community guidelines. Regardless of the topic. Regardless of the quality of the contributions.
@vb "posts"? Is that what you call "comments" here? Did you try opening the "6 posts"?
"Regardless of the topic" … so if someone is talking about the topic I have spent over 100 hours researching, I can't post a comment that mentions where that person can find useful free data about what they are talking about? Like doing this 3 times in a month is called spamming? I'm not following your reasoning, guys.
@N2U @PaulBellow I still don't get what I should have done differently. Is the problem the links?
I intend to continue not breaking the rules because, as far as I have seen, I haven’t broken them, but your comments seem to say that I have, and I don’t get it.
We’re trying to think of the future. People are noticing that link in a lot of threads, so we’re saying something now to start a dialogue and saying why it might be an issue in the future if you continue down that path…
There’s no reason to be worried, we’re just trying to help.
When posting about your own project, it's best to keep everything to one topic, like this one for example. It helps people who'd like to follow along with what you're doing, and helps keep the forum organized. If you end up making many different topics, or posting about your project on other people's topics, it might be interpreted as spam.
The problem is that the architecture of current GPTs is too simple. You can’t build anything really good and worthy to call it an app.
Heck, you can't even set a temperature or seed parameter! That's insane.
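For comparison, here is a minimal sketch of calling the Chat Completions API directly, where both of those parameters do exist (assuming the official `openai` Python SDK; the model name and values are just placeholders), something a GPT's builder UI doesn't expose:

```python
# Sketch: setting temperature and seed via the Chat Completions API directly.
# Assumes the official `openai` Python SDK (>= 1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize the GPT Store in one sentence."}],
    temperature=0.2,      # lower = more deterministic output
    seed=42,              # best-effort reproducibility across runs
)
print(response.choices[0].message.content)
```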
At the very least, you should be able to chain a queue of requests in your GPT (just let the person talking to it know they'll spend 3 or 4 messages instead of 1, but they'll get a really, really good response).
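As a rough illustration of that multi-step idea against the raw API (same assumptions as the sketch above; the prompts are made up), trading three calls for one better answer:

```python
# Sketch: a small draft -> critique -> revise chain, i.e. spending a few calls
# instead of one to get a better final answer.
# Assumes the official `openai` Python SDK and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4-turbo"  # placeholder model name

def ask(prompt: str) -> str:
    """One chat completion call, returning only the reply text."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "Why is ranking GPTs by conversations started a weak quality metric?"
draft = ask(question)
critique = ask(f"List the weakest points of this answer:\n\n{draft}")
final = ask(
    "Rewrite the answer below, fixing the listed weaknesses.\n\n"
    f"Answer:\n{draft}\n\nWeaknesses:\n{critique}"
)
print(final)
```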
The current situation of GPTs is just that: a one-message prompt wrapper, sometimes with good but simple function calling. That's it. That's why it's so easy to copy-paste, that's why the store is flooded with endless copies of other GPTs, and that's why these GPTs aren't really worth anything…
This situation really makes me mad sometimes, so much potential is down the drain.
Thanks for posting. I agree with you. The GPT Store is full of junk, as I have stated several times in my posts. The incentives are not there to promote good-quality content. But I guess OpenAI doesn't care, just like they don't care about anything written on this forum or about the degraded quality of the models. They just want our data.
Au contraire, mon capitan. It makes a lot of sense. The barriers to entry in the GPTs market are so incredibly low that this was the only possible outcome.
The discussion of custom GPTs and the rampant spam and other issues is a bit moot at the moment, seeing as the GPT-4 custom GPT process doesn't seem to even work anymore. My last attempt (23/04/2023) couldn't even complete the "saving" of the current instructions (i.e. it was left in Draft), and there doesn't seem to be a SAVE button? Even after a heated "discussion" with the assistant GPT, which I think has lost all ability to make an output better. Or is it just the whole GPT system at the moment?
I'm certainly not happy with the current output of GPT-4. But when I pose a specific prompt and get fairly okay results in a session, then go to the custom builder thinking I have an idea to perfect into a custom GPT, the output is just worse, and it seems no amount of adjusting through the assistant process (or then my own customizing of the configuration) can reproduce the earlier success from a plain chat.
Then the new "@mentioning" of custom GPTs seems very difficult to switch to when using text-to-speech (or in reverse) via the phone app, as selecting a custom GPT doesn't seem possible with the current "pop up" list of selectable items (when you're speaking to ChatGPT)?
(I use Windows, web browser, ChatGPT mostly at the moment).
Just adding my own follow-up… I mentioned in my last post not being able to SAVE a custom GPT. So I'm searching around now, thinking I need to find a solution, since logging back in did not seem to change the behaviour.
I then noticed a bunch of already-reported issues from November describing the same things I am now experiencing with the sidebar, responses, custom GPTs, etc.
I think I may have just been updated behind the scenes to a bad version that OpenAI rolls out to different users? I say that only because I hadn't experienced these particular issues with custom GPTs until more recently, but others have reported them before. So either they didn't listen and fix the issues from the first round or two of lucky customers, and just keep on rolling it out…?