Plugin Declined by Trust and Safety Team with no additional note

I have just received the news that the Trust and Safety Team declined my plugin without explanation (after 7 days of silence). How is this possible? As far as I can see from my analytics, the team did not perform a single action using my plugin.

I want to escalate this because I am not happy with the response from the support chat:

Hi there,

Thank you for submitting your ChatGPT plugin for review. We were unable to approve your plugin for one or more of the following reason(s):

  • Unfortunately, your plugin has not been approved by our Trust and Safety Team.

Thank you for being an early ChatGPT plugin developer and we look forward to reviewing your next submission.

– OpenAI team


Maybe if you described what it is your plugin is supposed to do someone here would be able to give you some insight or guidance.


Thank you for your reply.

My plugin is a pentest assistant that can perform both reconnaissance and light web vulnerability scanning, and it can interpret the results using GPT-4's knowledge. There are two important details here:

  1. The plugin will not execute any action without the user granting permission and authorization to scan that particular asset. (You are not supposed to scan anything you do not have authorization for.)
  2. The actions performed are not intrusive (for example, the web scanner will not bring a website down by sending too many requests).
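To make the "non-intrusive" claim from point 2 concrete, here is a minimal sketch (not the plugin's actual code, just an illustration under assumed design) of how a scanner could throttle itself with a fixed-interval rate limiter so it never floods a target:

```python
import time

class RateLimiter:
    """Fixed-interval limiter: allow at most `rate` requests per second."""

    def __init__(self, rate: float):
        self.min_interval = 1.0 / rate  # minimum seconds between requests
        self.last = 0.0                 # monotonic timestamp of last request

    def wait(self):
        """Block until enough time has passed since the previous request."""
        now = time.monotonic()
        remaining = self.min_interval - (now - self.last)
        if remaining > 0:
            time.sleep(remaining)
        self.last = time.monotonic()

# Cap the scanner at 5 requests per second.
limiter = RateLimiter(rate=5)

start = time.monotonic()
for _ in range(3):
    limiter.wait()  # in a real scanner, the HTTP probe would go here
elapsed = time.monotonic() - start
```

With a 5 req/s cap, the second and third calls each wait roughly 0.2 s, so three probes cannot be fired faster than the configured budget regardless of how quickly the surrounding loop runs.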

I can understand that this use case may be particularly sensitive, but from my perspective I have followed all of the rules listed by OpenAI and taken great care with the ethical aspects.

If anyone has input here, please let me know. I am really looking to bring this into the store in one form or another.



I can’t speak for OpenAI, but I think that would light up all sorts of alarm bells. Just not a good look for an AI to be anywhere near trying to “hack” sites.


It really depends on the perspective. Our light scanner will never ‘hack’ a website, but it will help you (as an owner or administrator) find some potential issues.


Based on the fact your plugin was declined, it appears that is OpenAI’s perspective.

Also not speaking on behalf of OpenAI, but they tend to lean heavily on the cautious side. The usage policies cover a lot of ground: Usage policies

If you look at the spirit of the policy, double-edged things that can be used both to help and to harm would likely be rejected. If it's something that finds potential issues and allows some users to exploit those issues, that sounds like something that would be rejected.

I’m surprised they didn’t just get GPT to write a proper rejection letter though.


It seems like a review error.

OpenAI are very interested in this subject.

We are launching the Cybersecurity Grant Program—a $1M initiative to boost and quantify AI-powered cybersecurity capabilities and to foster high-level AI and cybersecurity discourse.

Thank you all for your perspectives on this. I am still interested in getting an opinion from OpenAI Staff if possible.

cc @EricGT @logankilpatrick

I don’t think so. That is very much a distinct, standalone program.

I am not an OpenAI employee.

Of the two you listed, only Logan is an OpenAI employee.

Also, my understanding is that while Logan does communicate with the Trust and Safety Team, their actions are independent of his.

I don’t know why you’d think this would be approved. Even from a public relations perspective:

“Pentest AI plugin”

“This plugin allows the general public to use AI to efficiently probe non-consenting internet sites in order to gather information about vulnerabilities and exploits they can use against the targets.”


Declining a pen-testing plugin seems fairly uncontroversial to me.

I am facing the same situation. Our plugin was declined with no reason given.
My plugin is a connector to our low-code platform service, so there is no danger in enabling the plugin.
"Headless ERP Generator" is the plugin name.
I resubmitted the plugin; could anyone from OpenAI help us?

Sorry for the confusion @EricGT and thanks for the clarification.

Yes, my Code Runner plugin was also rejected over these security issues. I only changed the plugin's URL and re-submitted it, that's it; they changed their policy and rejected it.

Their house, their rules. This isn't a public utility, it's a pre-alpha research platform. You HAVE to recognize that anything we make is built on sand. We agree to all this in the TOS.

I’m fairly confident that the teams who built GPT know what they’re doing with their own platform, better than any of us do.

The Plugins team needs a lot of improvement: they take two weeks to accept a plugin and then reject it for whatever reason they want. So yes, I know the plugin team doesn't know what they are doing right now; not talking about all the teams, though.

haseeb_heaven is saying that their plugin was already approved and in the store, then got rejected upon re-submission after changing only a URL.

The store rules are not fully documented and definitely not being applied consistently.

Zapier's plugin, for example, can seemingly do things that no other plugin can via a dynamic OAS file. When you start a conversation with the Zapier plugin enabled, they must be injecting the enabled actions into that file, because if you look at the Zapier plugin debug window, the endpoints have UIDs. Not to mention that the URL to the Zapier OAS file has the word "dynamic" in its path as a clue. Other developers who have changed their OAS file have been kicked out of the store.

I don’t know how else to explain this: Plugin terms

OpenAI can do whatever they want with the things we’re building on their platform. The landscape has changed since most of these plugins were accepted into the store. If anyone has a plugin that doesn’t fit the terms and resubmits, it’s their prerogative to reject it.

With the release of Code Interpreter, OpenAI is most likely covering its liability bases at this point. Not allowing additional ways for users to direct ChatGPT to write and execute code makes sense. They can only really vet their own security, and they don't want to be held liable if some third-party developer jailbreaks something and claims they could only do it with ChatGPT.

I have a question about people getting kicked from the store and the Zapier claim. Their docs say that the only thing that will get you kicked out is changing your ai-plugin.json manifest.
I haven't seen proof that changing ONLY your OAS file kicks you from the store, and I've changed mine without effect.
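For reference, the manifest and the OAS file are separate documents: the manifest only points at the spec via its `api.url` field, which is why changing the spec is not the same as changing the manifest. A stripped-down illustrative manifest (all values here are placeholders, not any real plugin's manifest):

```json
{
  "schema_version": "v1",
  "name_for_human": "Example Plugin",
  "name_for_model": "example_plugin",
  "description_for_human": "Illustrative manifest only.",
  "description_for_model": "Illustrative manifest only.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "dev@example.com",
  "legal_info_url": "https://example.com/legal"
}
```

Under that reading, regenerating the document served at `api.url` leaves the manifest byte-for-byte identical.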

Zapier doesn't have any special permissions; they are following the rules and framework according to the docs. From what I gather, they are using an OAS generator that creates a unique spec file for the specific user based on which zaps they are using. If there is a threshold for OAS changes before it affects GPT's interaction with the plugin, then I'm sure Zapier knows where it is.
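The per-user generator described above can be sketched roughly like this. This is an assumption about how such a dynamic spec could be built (the function name, UID scheme, and structure are all hypothetical, not Zapier's actual implementation): each enabled action becomes its own endpoint with a unique ID, which would explain the UID-suffixed paths visible in the debug window.

```python
import uuid

def build_user_spec(enabled_zaps: list[str]) -> dict:
    """Hypothetical sketch: build a per-user OpenAPI spec where each
    enabled action is exposed as its own UID-suffixed endpoint."""
    paths = {}
    for zap in enabled_zaps:
        uid = uuid.uuid4().hex[:8]  # unique ID per exposed action
        paths[f"/actions/{uid}"] = {
            "post": {
                "operationId": f"run_{uid}",
                "summary": f"Run the '{zap}' action",
                "responses": {"200": {"description": "Action result"}},
            }
        }
    return {
        "openapi": "3.0.1",
        "info": {"title": "Dynamic per-user spec", "version": "1.0"},
        "paths": paths,
    }

# Two users with different zaps enabled would get two different specs.
spec = build_user_spec(["Send Slack message", "Create Trello card"])
```

Serving the output of something like this from a "dynamic" URL would let the advertised endpoints change per conversation without the manifest itself ever changing.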

The way these very normal policy enforcements are being portrayed as unfair or negligent is neither accurate nor constructive.

I'm not saying that the audacity of this predicament doth not diverge so remarkably from our aspirations, or that people don't deserve to be mad about the changes in the environment, but let's stop trying to rally pitchforks around false narratives.