Thank you all for your perspectives on this. I am still interested in getting an opinion from OpenAI Staff if possible.
I don’t think so. That is very much a distinct, standalone program.
I am not an OpenAI employee.
Of the two you listed only Logan is an OpenAI employee.
Also, my understanding is that while Logan does communicate with the Trust and Safety Team, their actions are independent of his.
I don’t know why you’d think this would be approved. Even from a public relations perspective:
“Pentest AI plugin”
“This plugin allows the general public to use AI to efficiently probe non-consenting internet sites in order to gather information about vulnerabilities and exploits they can use against the targets.”
Declining a pen-testing plugin seems fairly uncontroversial to me.
I am facing the same situation. Our plugin was declined with no reason given.
My plugin is a connector to our low-code platform service, so there is no danger in enabling it.
“Headless ERP Generator” is the plugin name.
I have resubmitted the plugin. Could anyone from OpenAI help us?
Sorry for the confusion @EricGT and thanks for the clarification.
Yes, my Code Runner plugin was also rejected over these security issues. All I did was change the plugin's URL and re-submit; in the meantime they changed their policy and rejected it.
Their house, their rules. This isn't a public utility, it's a pre-alpha research platform. You HAVE to recognize that anything we make is built on sand. We agree to all this in the TOS.
I’m fairly confident that the teams who built GPT know what they’re doing with their own platform, better than any of us do.
The Plugins team needs a lot of improvement: they take two weeks to review a plugin and then reject it for whatever reason they want. So yes, I'd say the plugin team doesn't know what it's doing right now; I'm not talking about all the teams, though.
haseeb_heaven is saying that their plugin was already approved and in the store, then got rejected upon re-submission after only changing a URL.
The store rules are not fully documented and definitely not being applied consistently.
Zapier’s plugin, for example, can seemingly do things no other plugin can, via a dynamic OAS file. When you start a conversation with the Zapier plugin enabled, they must be injecting the enabled actions into that file, because if you look at the Zapier plugin’s debug window, the endpoints have UIDs. Not to mention that the URL to the Zapier OAS file has the word “dynamic” in its path as a clue. Other developers who have changed their OAS file have been kicked out of the store.
I don’t know how else to explain this: Plugin terms
OpenAI can do whatever they want with the things we’re building on their platform. The landscape has changed since most of these plugins were accepted into the store. If anyone has a plugin that doesn’t fit the terms and resubmits, it’s their prerogative to reject it.
With the release of Code Interpreter, OpenAI is most likely covering their liability bases at this point. Not allowing additional ways for users to direct ChatGPT to write and execute code makes sense. They can only really vet their own security and don’t want to be held liable if some third-party developer jailbreaks something and claims they could only do it with ChatGPT.
I have a question about people getting kicked from the store and the Zapier claim… Their docs say that the only thing that will get you kicked out is if your ai-plugin.json manifest is changed.
I haven’t seen proof that changing ONLY your OAS file kicks you from the store, and I’ve changed mine without effect.
Zapier doesn’t have any special permissions; they are following the rules and framework according to the docs. From what I gather, they are using an OAS generator that creates a unique spec file for each user based on which Zaps they are using. If there is a threshold for OAS changes before it affects GPT’s interaction with the plugin, then I’m sure Zapier knows where it is.
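To make the idea concrete, here is a minimal sketch of what a per-user OAS generator could look like. This is purely a guess at the technique described above, not Zapier's actual implementation; the `/exposed/<uid>/execute` path shape, operation IDs, and field names are all hypothetical:

```python
import json
import uuid

def build_user_spec(enabled_actions):
    """Build a minimal OpenAPI spec whose paths are generated per user
    from the actions they have enabled (hypothetical structure)."""
    paths = {}
    for action in enabled_actions:
        # Each enabled action gets its own endpoint with a unique ID,
        # which would explain the UIDs visible in the plugin debug window.
        uid = uuid.uuid4().hex[:8]
        paths[f"/exposed/{uid}/execute"] = {
            "post": {
                "operationId": f"run_{uid}",
                "summary": action["description"],
                "responses": {"200": {"description": "Action result"}},
            }
        }
    return {
        "openapi": "3.0.1",
        "info": {"title": "Dynamic Actions", "version": "1.0"},
        "paths": paths,
    }

spec = build_user_spec([{"description": "Send an email via Gmail"}])
print(json.dumps(spec, indent=2))
```

Serving a spec like this from a "dynamic" URL would let the set of endpoints change per user without the manifest itself ever changing.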
The way these very normal policy enforcements are being represented as unfair or negligent is neither accurate nor constructive.
I’m not saying that the audacity of this predicament doth not diverge so remarkably from our aspirations, or that people don’t deserve to be mad about the changes in the environment, but let’s stop trying to rally pitchforks around false narratives.
Removed after just changing the OAS file, not the manifest: Plugin removed from store without changes to ai-plugin.json manifest
Sorry you are getting some rather hostile/flippant replies. Rather than jumping to conclusions and assuming you are building a full-on red-team AI, I’d like to ask: are you doing more basic-level penetration testing, for example port scanning, service and version discovery, and exploit scanning? I think this would be great for DevSecOps teams: bake it directly into the CI/CD pipeline for reviews.
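For context, the "basic level" checks mentioned above can be very simple. Here is a minimal TCP connect-scan sketch (the host and port list are placeholders; only run this against hosts you are authorized to test):

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """TCP connect scan: return the subset of ports that accept a connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Scan localhost for a few common service ports (authorized use only)
print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

In a CI/CD pipeline, a check like this could assert that only the expected service ports are open on a freshly deployed host and fail the build otherwise.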
While I agree a full red-team AI would probably get banned, I also know that is not necessarily the first or even the most important part of pen testing. Either way, I run a software development firm that does government contracting and did legacy migration for the eastern and western launch complexes for the Space Force. I know for a fact that authorization to operate is a huge issue right now and is dragging out their “Range of the Future” project, which the space industry is relying on. So it is something that is extremely valuable, and I’d be willing to help out.
If you’d like, reach out to me in a DM and we can talk about your plugin. It’s very likely we can get around the reliance on OpenAI by implementing it on other services. TBH, OpenAI doesn’t really have the time to seriously review these plugins with experts in every single domain. As you can see, half the people in this thread are in “omg, hacker AI” mode; that could be the case, but nothing you have said suggests it is. I suspect whoever reviewed it didn’t have enough information to make a good judgment call.
I don’t think the users of this forum are the sort to go down the “hacker AI” path, but they understand that the popular press is. Unfortunately, that means the likely response is going to be one of abundant caution while AI, which for many of us is not a “new” thing, is introduced to the general public at scale.
Caution is fine, but those of us who build non-trivial plugins put a lot of time, effort, and expense into them.
For OpenAI to reject a submission without even the courtesy of a reason is just wrong.
Never mind that sometimes they provide rejection reasons that aren’t documented in their “rules”, or reject a plugin even though other plugins already in the store offer the same functionality.
Many plugins in the store are therefore trivial: not much more than a single API wrapper (Earthquakes, anyone?), or serving no purpose other than “lead generation”, where a company just wants visibility in the store for the eyeballs. I’ve gone through all the hundreds of plugins myself. Most are junk.
Honest, hard innovation is being suppressed. OpenAI wants ChatGPT to be Disney, where Disneyland is a safe place for all… but in the end it doesn’t really do anything useful because it is censored to death.
I understand where you are coming from. I think there are two main forces at work here. The first is perception: to us devs on the outside looking in, we see our app and a bunch of people we know on the forums; what we don’t see is the thousands of other people a day submitting plugins, and a limited number of reviewers handling that stream. I would imagine a lot of it is keyword-filtered, or even AI-filtered, just to keep up. The second is the need to keep this disruptive technology as publicly friendly as possible, which includes things like not going down the adult-entertainment route and not engaging in activities with a perceived (if not actual) high risk associated with them. In time I am sure attitudes will relax, but right now… I kind of get it.
Indeed, I have empathy for what it must be like on the reviewing side of the wall.
Yet I was an early iOS developer and the Apple store was far better in its initial days.
Sure, the plugin review process can get better over time. But they are clearly understaffed and unprepared for the volume of submissions.
I frankly think they should just close the plugin store and rethink their strategy. It’s pretty clear that the quality of accepted plugins is poor and they are afraid to let in non-trivial plugins unless they are from trusted partners.
In the end, a poorly functioning review process just makes developers mad and wastes their time and resources.
My Code Runner plugin also got rejected with no additional notes, even though it was approved previously and there are similar plugins in the store that execute code.
Check my post about it