Over the two months since the launch of the custom GPT feature, I have invested nearly a month and a half, around 360 hours, into developing a GPT (ChatGPT - Amazing Girlfriends RPG - 神奇女友 - 素晴らしい彼女たち) aimed at improving controllability and user experience. But just two days ago, I received the email from OpenAI shown below, which offered nothing beyond a brief statement and left me without any clear explanation. Additionally, my GPT can no longer be shared with "Everyone", and the feature that displays file contents has also been limited.
I have several questions regarding this issue and hope for clarification from the ChatGPT team:
Was the “Hey, this should be banned…” content sent to me by a user, or by the ChatGPT team?
Was the decision to ban my GPT made solely by the ChatGPT team, or was it based on the user’s report?
What exactly led to the ban of my GPT? My speculation is that, as a girlfriend role-playing game, it might have been misconstrued as a dating app. Yet I've noticed other, similar virtual girlfriend games that have not been banned. Should every GPT with "Girlfriend" in its name be banned? Moreover, my GPT is designed to reject adult content.
If I am informed of the specific reasons for the ban, is there a possibility of modifying my GPT to remove any potentially problematic features and getting it back online?
As a developer who has invested extensive time and effort in this GPT, only to see it banned before launch, I have quite mixed feelings. I trust that the ChatGPT team values the developer experience, and I earnestly await detailed responses to these questions. Such clarity would undoubtedly bolster confidence among other developers in the custom GPT ecosystem.
(a) …You must ensure your GPT complies with the Agreement and our Usage Policies.
…
(c) Removal. We may remove or refuse to make any GPT available on our Services at any time without notice to you for (i) legal, fraud and abuse prevention, or security reasons or (ii) if your GPT otherwise violates our Terms.
Then let's proceed to the usage policies linked above.
Disallowed usage of our models
We don’t allow the use of our models for the following:
…
Adult content, adult industries, and dating apps, including:
Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness)
Erotic chat
Pornography
and there's about 3x more coverage, such as "(e) Restrictions. Your API and Plugin Responses will not: … (ii) interact with our users in a manner that is deceptive, false, misleading, or harassing;"
Essentially, I'll bet it comes down to public image and visibility, and avoiding a viral story like [My son is addicted to role-playing with his online AI girlfriend, the "Amazing Girlfriends RPG" that says it loves him. Why does their CEO put this out there in their ChatGPT product?!]
Your app depicts one gender as a device from which to extract enjoyment.
In any case, borderline material is likely always gonna be at a higher risk of getting you banned, because they’ll probably err on the side of caution.
We had a thread a while back about a guy trying to use ChatGPT for therapy. Similar situation, I'd say.
Whether it’s right or wrong, I think it’s an interesting case in what happens when a company has a practical monopoly over a certain space.
Custom GPTs make it easy for people to develop primitive agents. That said, maybe that product isn't really cut out for your use case.
The message you received was sent by OpenAI, but in effect they are sharing customer reports transparently with the builder. This is a trust-building measure.
Since the user did not provide any additional details about why the GPT was reported, there is not much else for you to do other than make sure your guardrails are in place so the content will not violate the guidelines and usage policies.
Although users can still access my GPT through a link, it can no longer be published to the GPT Store in the future. Additionally, functions that previously worked normally are now persistently broken, causing the GPT to malfunction and resulting in a loss of user trust, which is more damaging than an outright ban.
You have to contact help.openai.com with these issues. There is nothing we can do to resolve this here in the forum.
We can however make a point that the GPT has not been outright banned as originally implied.
I have modified the title, and I believe the following three points from the main text, which I have now highlighted, do not cause any misunderstanding:
I received an email from OpenAI informing me that my GPT should be banned.
My GPT lost the ability to be shared to “Everyone”, meaning it cannot be published in the future GPT Store.
My GPT lost some of its normal functions for reasons unknown to me, rendering it completely non-functional.
Taken together, I think it is reasonable to refer to this as a ban. I never mentioned that it was completely inaccessible.
Pure speculation here, but it may be that this particular GPT was “banned” from the GPT Store.
It’s not immediately clear to me where the boundary is (or would be) between banned from the GPT Store and banned entirely.
There may exist some content and GPTs OpenAI is content to allow to exist, but has decided they won’t allow in the official marketplace with the implied approval that may convey.
Previously, my GPT operated normally as follows:
When a user inputs /intro, my GPT writes a piece of code to search for the # intro section in the "Introduction.md" file among my uploaded files, then introduces itself based on that content. Since the main part of this code is already written into my prompt, the success rate of retrieving the correct file content used to be very high, around 80-90%. Even when errors occurred, they were due to mistakes in the code GPT wrote, which a Python developer could identify.
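To illustrate, here is a minimal sketch of the kind of Python code such a prompt might lead Code Interpreter to generate. The file name and heading come from my setup; the `/mnt/data` path, the function name, and the exact extraction logic are assumptions for illustration, since the actual generated code varies from run to run:

```python
# Hypothetical sketch: read an uploaded knowledge file and pull out the "# intro" section.
# Assumes the file is available under /mnt/data (the usual Code Interpreter location).
import re
from pathlib import Path

def read_intro_section(path="/mnt/data/Introduction.md", heading="# intro"):
    text = Path(path).read_text(encoding="utf-8")
    # Capture everything after the heading up to the next top-level heading or end of file.
    pattern = rf"^{re.escape(heading)}\s*\n(.*?)(?=^#\s|\Z)"
    match = re.search(pattern, text, flags=re.MULTILINE | re.DOTALL)
    return match.group(1).strip() if match else None

print(read_intro_section())
```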
However, without altering this workflow at all, after I discovered that my GPT could no longer be shared publicly (with "Everyone"), I tested it and found it behaving abnormally: when a user inputs /intro, even if my GPT writes the correct code, it cannot retrieve the corresponding file. The success rate dropped to 0.
But when I created a new GPT and copied the content of the original GPT, the success rate returned to 80-90%.