I have just received the news that the Trust and Safety Team rejected my plugin without any explanation (after 7 days of silence). How is this possible? As far as I can tell from my analytics, the team did not perform even a single action using my plugin.
I want to escalate this because I am not happy with the response from the support chat:
Hi there,
Thank you for submitting your ChatGPT plugin for review. We were unable to approve your plugin for one or more of the following reason(s):
Unfortunately, your plugin has not been approved by our Trust and Safety Team.
Thank you for being an early ChatGPT plugin developer and we look forward to reviewing your next submission.
My plugin is a pentest assistant that can perform both reconnaissance and light web vulnerability scanning, and it can interpret the results using GPT-4's knowledge. There are two important details here:
The plugin will not execute any action without the user granting permission and confirming authorization to scan that particular asset. (You are not supposed to scan anything you are not authorized to scan.)
The actions performed are not intrusive (for example, the web scanner will not bring a website down by making too many requests).
I can understand that this use case may be particularly sensitive, but from my perspective, I have followed all of the rules listed by OpenAI and taken great care with the ethical aspects.
If anyone has input here, please let me know. I am really looking to bring this into the store in one form or another.
I can't speak for OpenAI, but I think that would light up all sorts of alarm bells. Just not a good look for an AI to be anywhere near trying to "hack" sites.
It really depends on the perspective. Our light scanner will never "hack" a website, but it will help you (as an owner or administrator) find some potential issues.
Also not speaking on behalf of OpenAI, but they tend to lean heavily toward the cautious side. Their usage policies cover a lot of ground: Usage policies
If you look at the spirit of the policy, double-edged tools that can be used both to help and to harm would likely be rejected. If it's something that finds potential issues and allows some users to exploit those issues, that sounds like something that would be rejected.
I'm surprised they didn't just get GPT to write a proper rejection letter, though.
We are launching the Cybersecurity Grant Program, a $1M initiative to boost and quantify AI-powered cybersecurity capabilities and to foster high-level AI and cybersecurity discourse.
I don't know why you'd think this would be approved. Even from a public relations perspective:
"Pentest AI plugin"
"This plugin allows the general public to use AI to efficiently probe non-consenting internet sites in order to gather information about vulnerabilities and exploits they can use against the targets."
I am facing the same situation. Our plugin was declined with no reason given.
My plugin is a connector to our low-code platform service, so there is no danger in enabling the plugin.
"Headless ERP Generator" is the plugin name.
I resubmitted the plugin. Could anyone from OpenAI help us? @logankilpatrick
Yes, my Code Runner plugin was also rejected over these security issues. All I did was change the plugin's URL and re-submit it, but they had changed their policy and rejected it.
Their house, their rules. This isn't a public utility; it's a pre-alpha research platform. You HAVE to recognize that anything we make is built on sand. We agreed to all of this in the TOS.
I'm fairly confident that the teams who built GPT know what they're doing with their own platform, better than any of us do.
The Plugins team does need a lot of improvement, though. They take two weeks to accept a plugin and then reject it for whatever reason they want, so yes, I'd say the plugin team doesn't know what it's doing right now. I'm not talking about all the teams, though.
haseeb_heaven is saying that their plugin was already approved and in the store, then got rejected upon re-submission after only changing a URL.
The store rules are not fully documented and definitely not being applied consistently.
Zapier's plugin, for example, can seemingly do things no other plugin can via a dynamic OAS file. When you start a conversation with the Zapier plugin enabled, they must be injecting the enabled actions into that file, because if you look at the Zapier plugin debug window, the endpoints have UIDs. Not to mention that the URL to the Zapier OAS file has the word "dynamic" in its path as a clue. Other developers who have changed their OAS file have been kicked out of the store.
I don't know how else to explain this: Plugin terms
OpenAI can do whatever they want with the things we're building on their platform. The landscape has changed since most of these plugins were accepted into the store. If anyone has a plugin that doesn't fit the terms and resubmits, it's their prerogative to reject it.
With the release of Code Interpreter, OpenAI is most likely covering their liability bases at this point. Not allowing additional ways for users to direct ChatGPT to write and execute code makes sense. They can only really vet their own security, and they don't want to be held liable if some third-party developer jailbreaks something and claims they could only have done it through ChatGPT.
I have a question about people getting kicked from the store and the Zapier claim… Their docs say that the only thing that will get you kicked out is if your ai-plugin.json manifest is changed.
I haven't seen proof that changing ONLY your OAS file kicks you from the store, and I've changed mine without effect.
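For context, the "manifest" here means the ai-plugin.json file, which is a separate document from the OAS (OpenAPI) spec it points to. A minimal sketch of the documented format, with placeholder values:

```json
{
  "schema_version": "v1",
  "name_for_human": "Example Plugin",
  "name_for_model": "example_plugin",
  "description_for_human": "Placeholder description shown to users.",
  "description_for_model": "Placeholder description shown to the model.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "https://example.com/legal"
}
```

Under that reading, editing any of these manifest fields is what triggers removal, while the spec file hosted at api.url can change without touching the manifest itself.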
Zapier doesn't have any special permissions; they are following the rules and framework according to the docs. From what I gather, they are using an OAS generator that creates a unique spec file for each user based on which Zaps they are using. If there is a threshold of OAS changes before it affects GPT's interaction with the plugin, then I'm sure Zapier knows where it is.
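To make that mechanism concrete, here is a minimal sketch of what a per-user "dynamic" spec generator could look like. This is not Zapier's actual code; the function name, path shape, and UID scheme are assumptions modeled on the UID-suffixed endpoints visible in the plugin debug window:

```python
def build_user_spec(user_actions):
    """Build an OpenAPI spec exposing only the actions this user enabled.

    user_actions: dict mapping a hypothetical action UID to a short description.
    """
    paths = {}
    for uid, description in user_actions.items():
        # Each enabled action becomes its own endpoint, keyed by UID,
        # matching the UID-suffixed endpoints seen in the debug window.
        paths[f"/api/v1/exposed/{uid}/execute"] = {
            "post": {
                "operationId": f"run_{uid}",
                "summary": description,
                "responses": {"200": {"description": "Action result"}},
            }
        }
    return {
        "openapi": "3.0.1",
        "info": {"title": "Dynamic actions", "version": "1.0"},
        "paths": paths,
    }


# One user with a single enabled action yields a one-endpoint spec.
spec = build_user_spec({"abc123": "Send a Slack message"})
print(sorted(spec["paths"]))  # → ['/api/v1/exposed/abc123/execute']
```

If the spec is regenerated on each fetch, the plugin's visible endpoints automatically track whichever actions that particular user has enabled, without the manifest ever changing.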
The way these very normal enforcements of policy are being represented as unfair or negligent is neither accurate nor constructive.
I'm not saying that the audacity of this predicament doth not diverge so remarkably from our aspirations, or that people don't deserve to be mad about the changes in the environment, but let's stop trying to rally pitchforks around false narratives.