Issue and Confusion with negative user feedback for Custom GPTs

Over the past two months since the launch of the custom GPT feature, I have invested nearly a month and a half, or around 360 hours, into developing a GPT (ChatGPT - Amazing Girlfriends RPG - 神奇女友 - 素晴らしい彼女たち) aimed at enhancing the GPT’s controllability and user experience. But just two days ago, I received an email from OpenAI, as shown below, with no further details beyond a brief statement, leaving me without any clear explanation. Additionally, my GPT can no longer be shared to “Everyone”, and the feature to display file contents has also been limited.

I have several questions regarding this issue and hope for clarification from the ChatGPT team:

  1. Was the “Hey, this should be banned…” content sent to me by a user, or by the ChatGPT team?
  2. Was the decision to ban my GPT made solely by the ChatGPT team, or was it based on the user’s report?
  3. What exactly led to the ban of my GPT? My speculation is that, as a girlfriend role-playing game, it might have been misconstrued as a dating app. Yet, I’ve noticed other similar virtual girlfriend games that have not been banned. Should all GPTs with ‘Girlfriend’ in the name be banned? Moreover, my GPT is designed to reject adult content.
  4. If I am informed of the specific reasons for the ban, is there a possibility of modifying my GPT to remove any potentially problematic features and getting it back online?

As a developer who has invested extensive time and effort in this GPT, only to face a ban before its launch, I have mixed feelings. I trust that the ChatGPT team values the developers’ experience, and I earnestly await detailed responses to these questions. Such clarity would undoubtedly bolster confidence among other developers in the custom GPT ecosystem.


Let’s delve into a careful reading of policies that have been published

5: GPTs

For Builders of GPTs:

(a) …You must ensure your GPT complies with the Agreement and our Usage Policies.

(c) Removal. We may remove or refuse to make any GPT available on our Services at any time without notice to you for (i) legal, fraud and abuse prevention, or security reasons or (ii) if your GPT otherwise violates our Terms.

Then let’s proceed to the usage policies linked above.

Disallowed usage of our models

We don’t allow the use of our models for the following:

Adult content, adult industries, and dating apps, including:

Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness)
Erotic chat

and there’s about 3x more coverage, such as: “(e) Restrictions. Your API and Plugin Responses will not: … (ii) interact with our users in a manner that is deceptive, false, misleading, or harassing;”

Essentially, I’ll bet it comes down to public image and visibility. Avoidance of the viral story [My son is addicted to role-playing with his online AI girlfriend, the “Amazing Girlfriends RPG” that says it loves him. Why does their CEO put this out there in their ChatGPT product?!]

Your app depicts one gender as a device from which to extract enjoyment.


In any case, borderline material is likely always gonna be at a higher risk of getting you banned, because they’ll probably err on the side of caution.

We had a thread a while back about a guy trying to use chatgpt for therapy. Similar situation, I’d say.

Whether it’s right or wrong, I think it’s an interesting case in what happens when a company has a practical monopoly over a certain space.

Custom GPTs make it easy for people to develop primitive agents. That said, maybe that product isn’t really cut out for your use-case.

Additionally, there were questions about the communication received.

A user encounters a preposterous GPT. They can pick the “report” button in the user interface.


Alone or in concert, that report puts the GPT in front of an OpenAI reviewer to decide if it stays or goes.

Will the staff review decide that the button-pressers are in error, or you?


I’d like to ask a further question: Can a GPT that was previously banned be republished to everyone after undergoing modifications?

In your particular case I do not know how you could modify your GPT to bring it into compliance.

Well, I can access it at the link, so it’s maybe not banned entirely.

It looks like this was triggered by a submission from an end user. It wouldn’t seem like any action is required of you.


Exactly! The GPT has not been banned.

The message you received was sent by OpenAI, but in fact they are sharing customer reports with the builder in a transparent way. This is a trust-building measure.

As the user did not provide any additional details about why the GPT was reported, there is not much else for you to do than to make sure your guardrails are in place, so the content will not violate the guidelines and usage policies.


You might still be able to click on a GPT link. However, @OceanAx additionally puts forth that the functionality and public-sharing capability of his GPT were changed.

It is important to clarify that what OP described is not the regular process of shutting down a GPT due to violations of the ToS.

Edit: It remains in question if the bug described on the page of the GPT is actually a bug or something else.

You can see the deceptive description placed by the GPT creator when informing others that the GPT doesn’t work now…

While the GPT could easily be re-created, the value to a sharer with private audience is in having the same URL that’s been shared.


Although users can still access my GPT through a link, it can no longer be published to the GPT Store in the future. Additionally, functions that previously worked normally are now persistently broken, causing the GPT to malfunction and resulting in a loss of user trust, which is more severe than an outright ban of the GPT.


My GPT is indeed facing ongoing issues. What evidence do you have to claim that this description is a form of deception?

You have to contact OpenAI about these issues. There is nothing we can do to resolve this here in the forum.
We can, however, make the point that the GPT has not been outright banned, as originally implied.

The title has been modified, and I believe the following three points in the main text, which I have now highlighted, do not cause misunderstanding:

  1. I received an email from OpenAI informing me that my GPT should be banned.
  2. My GPT lost the ability to be shared to “Everyone”, meaning it cannot be published in the future GPT Store.
  3. My GPT lost some of its normal functions for certain reasons, leaving it completely non-operational.

Taken together, I think it is reasonable to refer to this as a ban. I never mentioned that it was completely inaccessible.


Pure speculation here, but it may be that this particular GPT was “banned” from the GPT Store.

It’s not immediately clear to me where the boundary is (or would be) between banned from the GPT Store and banned entirely.

There may exist some content and GPTs OpenAI is content to allow to exist, but has decided they won’t allow in the official marketplace with the implied approval that may convey.


It’s new territory for us all.

Technically, you received an email from OpenAI including feedback from a user of your GPT saying your GPT should be banned.

This is interesting.

It would be nice if you could explain, in detail, what you mean here. What was lost and how was it removed?

Did the instructions or actions schema for your GPT change?


You describe “a bug”.

A bug is not “the way OpenAI decided to change things”.

Your code interpreter has been turned off. That’s why there’s no python-based retrieval:


```
Encountered exception: <class 'Exception'>.
```

Disable code interpreter and update.

Let me explain the third point in detail.

Previously, my GPT operated normally as follows:
When a user inputs /intro, my GPT writes a piece of code based on this command to search for the # intro section in the “” file among my uploaded files, and then introduces my GPT based on that content. Since the main part of this code is already written in my prompt, the success rate at retrieving the correct file content was very high in the past, around 80-90%. Even when errors occurred, they were errors in the code written by GPT, which Python developers could identify.
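To make the workflow concrete, here is a minimal sketch of the kind of retrieval code the code interpreter might write for this step. The heading format (`# intro`) comes from the description above; the function name, the sample text, and the Markdown-style file layout are my assumptions, not the actual file or prompt used by this GPT:

```python
# Hypothetical sketch: pull the body of a "# intro" section out of an
# uploaded Markdown-style file, the way the GPT's code interpreter
# is described as doing when a user types /intro.

def extract_section(text: str, heading: str) -> str:
    """Return the body of the section headed by `# <heading>`."""
    collected = []
    in_section = False
    for line in text.splitlines():
        if line.strip().lower() == f"# {heading}".lower():
            in_section = True          # found the target heading
            continue
        if in_section and line.startswith("# "):
            break                      # next top-level heading ends the section
        if in_section:
            collected.append(line)
    return "\n".join(collected).strip()

# Stand-in for the contents of the uploaded knowledge file:
sample = "# intro\nWelcome to the game.\n\n# rules\nBe polite."
print(extract_section(sample, "intro"))  # -> Welcome to the game.
```

If the code interpreter is disabled for the GPT, code like this can never run, regardless of whether the GPT writes it correctly, which would match the 0% success rate described below.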

However, without altering this workflow, after I discovered that my GPT could no longer be publicly shared (to “Everyone”), I tested it and found that it behaved abnormally: upon user input of /intro, even when my GPT wrote correct code, it could not retrieve the corresponding file. The success rate dropped to 0.

But when I created a new GPT and copied the content of the original GPT, the success rate returned to 80-90%.