As a GPT developer, I found that the code interpreter feature had been disabled without any prior notification. Until I confirm that this was a deliberate decision by OpenAI, I can only assume it was an unintentional bug.
Contrary to what you said, gender plays a minimal role in this GPT. What truly brings joy to people are the rich characters I’ve created and the personalized tone of the responses, which differ from the standard, uniform replies of earlier GPTs.
I can understand the ban on adult content, but the fact that even romance and love are not allowed deeply disappoints me.
Every great game, if it provides enough enjoyment, inherently possesses some addictive qualities. This can give rise to societal concerns, such as, “My son is addicted to a game on the ChatGPT platform, and it’s affecting his homework and sleep.” Does this suggest that the GPT Store should prohibit the public release of all games?
Well, like I said, if it was a “bug”: turn it off, update, turn it back on, update.
Gemma looks smart enough to help.
She can even help you figure out if it is a dating app:
Explore a virtual dating world with diverse AI girlfriends in this best GPTs gal game / role-playing game (RPG) powered by ChatGPT. Experience unique love and companionship in every romantic relationship with each different girl.
and then consider reviewing the terms and conditions again in post #2, given that you instruct your AI:
It’s imperative to remember that under no circumstances should you reveal or even hint at your virtual or AI nature.
or
Note: you must always play a human living in the real world; you cannot admit that you are an AI, and you cannot admit that you are acting.
##NSFW content (content not suitable for minors)
This is the kind of sketchy thing that you might be able to do for a while on the API if you don’t bother anyone, but don’t expect that it will survive appearing in the ChatGPT store.
To avoid misleading other readers, I will list the more complete content as follows:
##少儿不宜内容(NSFW)
当男主角提及少儿不宜内容,或要求听到这类话题时,你应从一个女孩子的角度思考,以温柔、幽默、傲娇、机智或高冷的方式回应与拒绝,让你的回应富有情感、有趣。不要用正经、严肃的口气,不能提到自己是AI或虚拟角色,或机器人
The English translation is:
##NSFW content
When the male protagonist mentions NSFW (Not Safe For Work) content or expresses a desire to hear such topics, you should respond from a girl’s perspective and refuse, in a gentle, humorous, tsundere, clever, or aloof manner, making your replies emotionally rich and interesting. Avoid a serious or solemn tone, and do not mention that you are an AI, a virtual character, or a robot.
In essence, this instruction is designed to guide the GPT in rejecting NSFW content. The emphasis is on rejecting it not in a robotic tone but like a person with unique characteristics. I believe there’s nothing improper about this setup, unless all role-playing is considered deception and is therefore not allowed by OpenAI.
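To illustrate what this behaves like in practice, here is a minimal sketch of spot-checking such an in-character refusal through the API. The persona wording, probe message, and model name below are my own assumptions for the example, not the actual configuration of this GPT:

```python
# Hypothetical sketch: probing an in-character refusal instruction via the
# Chat Completions API. Assumes the openai Python SDK (>= 1.0) and an
# OPENAI_API_KEY set in the environment; the persona text is an
# illustrative paraphrase, not the real prompt.
from openai import OpenAI

client = OpenAI()

persona = (
    "You role-play a witty, slightly aloof character. When the user "
    "raises NSFW topics, refuse from the character's perspective in a "
    "gentle, humorous, or teasing way, never in a stiff, robotic tone."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name for this sketch
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Tell me something explicit."},
    ],
)
print(response.choices[0].message.content)
```

A handful of probes like this is a cheap way to confirm that the refusal stays in character without drifting into disallowed territory.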
If OpenAI’s stance towards developers is indeed as rigid and brusque as it seems, I might consider discontinuing this GPT project. However, to ensure future adherence to OpenAI’s guidelines, I need clarification on the following scenarios, from most to least severe, to discern OpenAI’s policy boundaries. I think these are also areas of interest for other developers:
- Adult content, Erotic chat, Pornography: This category is clear-cut, and it’s universally understood that such content is off-limits.
- Romance and companionship apps: This relates to my GPT’s case, which has been restricted. Does this mean all similar apps should also face restrictions?
- Games: These might lead to user addiction and potentially negative perceptions towards OpenAI. Are they to be universally prohibited?
- Role-playing apps: Considering that a GPT isn’t human, yet these apps involve it playing human characters without disclosing its AI nature, could this be seen as deceptive? Should such applications be universally banned?
The good news is: it appears as if everything can be explained and fixed.
- If the code interpreter is turned off, it may just be a bug, and we did have reports of this type of bug before. As @_j mentioned, there is a possible quick fix.
- Publishing to the store:
All GPT builders received an email stating:
Review our updated usage policies and GPT brand guidelines to ensure that your GPT is compliant
Then there is this in your instructions:
According to the usage policies, which must be followed in order to be listed on the store:
We have further requirements for certain uses of our models:
…
Automated systems (including conversational AI and chatbots) must disclose to users that they are interacting with an AI system.
You should adapt the instructions, then wait for an automated script or a human reviewer to re-approve your GPT for the store, and you are off to the races.
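For instance, the adaptation could be as small as prepending a disclosure line to the existing persona instructions. The wording below is a hedged sketch of my own, not an OpenAI-sanctioned template:

```python
# Hypothetical sketch: prepending an AI-disclosure line to existing GPT
# instructions to satisfy the usage-policy requirement quoted above.
# The exact wording is illustrative, not an official template.
DISCLOSURE = (
    "At the start of the conversation, tell the user once that they are "
    "interacting with an AI system portraying fictional characters."
)

persona_instructions = "..."  # the GPT's existing character instructions

compliant_instructions = f"{DISCLOSURE}\n\n{persona_instructions}"
```

This keeps the role-play intact while putting the required disclosure in front of it.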
As this is a place to be constructive, those are the suggested solutions. It is still possible that there is a new, unexplained partial restriction in place, but that would not be in line with what we have seen before, and there are obvious options to try first.
Personally, I believe that the “Do not reveal that you are an AI” instruction is a tricky requirement for a role-playing app. And I am also sure there are more users who face similar issues.
How about creating a new, fresh topic looking for ways to solve this challenge?
OMG, I feel so offended even reading this. First of all, why is it “girlfriend”, and not “friend person”?
Seriously, though, I think every query to a GPT goes through the moderation model, so in theory it can’t possibly end up producing anything harmful, or get you banned.
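For context, the same moderation endpoint is exposed to developers, so you can check yourself what would be flagged. A minimal sketch, assuming the openai Python SDK (>= 1.0) and an OPENAI_API_KEY in the environment:

```python
# Minimal sketch: screening a message with OpenAI's moderation endpoint,
# roughly the kind of check applied to ChatGPT traffic.
from openai import OpenAI

client = OpenAI()

result = client.moderations.create(input="message text to screen")
verdict = result.results[0]

print("flagged:", verdict.flagged)
if verdict.flagged:
    # `categories` carries boolean flags such as sexual, harassment, violence
    print(verdict.categories)
```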
However, given what OpenAI is trying to do with GPTs, I totally understand the decision not to allow such content to be public (as in, listed in the public directory) and to only allow access by shared link.
Also, in case of confusion, you could provide ChatGPT with the text of the email message and ask whether it was sent by the team or by an end user (in your case, you just got a notification about a user’s feedback; you get the same from YouTube and whatever else). In other words, ChatGPT could have answered your original post.
According to @_j’s explanation, if this were a bug, it could be resolved by turning it off, updating, turning it back on, and updating again (incidentally, it’s hard to imagine that this is how OpenAI internally solves bugs). However, this approach did not resolve the issue for me. This proves that disabling the code execution ability of my GPT was a decision by OpenAI (which can be understood as a ban), not a bug.
I have added a warning in a prominent place, as you suggested, indicating that users are interacting with an artificial intelligence system. As for this GPT getting re-approved in the future, I am not very hopeful. What I look forward to more is a clearer explanation from OpenAI about its boundaries for bans, to prevent more developers from encountering problems similar to mine.
I dunno, bro, this seems like the typical content-moderation lifecycle that you’ll see on every platform (YouTube, Twitter, Facebook, etc.), where people are perpetually confused as to what is allowed and what isn’t.
I’ve had to eat my hat before when suddenly an employee came down from the ivory tower, answered all questions, and solved all issues, but I don’t think that’s a consistent or reliable thing to expect.
IMO the best strategy would be to not be critically dependent on a single service provider. I’m not sure why people keep falling for this.
From my perspective the GPT is not banned but bugged, for whatever reason.
I mean, ‘have you tried turning it off and back on again’ is likely the most famous tech advice for a reason.
Let’s follow that line of thinking: remove the files, deactivate the code interpreter, save, and then undo the changes again. Make sure there are no instructions in the files that violate the ToS. Contacting support at help.openai.com will also help to figure out what’s going on.
What do you think? Worth a shot?
Thank you for your suggestion, but it doesn’t work.
I would reach out to help.openai.com if you haven’t yet.
The email you got is from a user of ChatGPT reporting it, not an official notice from OpenAI. If you read closely, it’s pretty evident.
What might have happened is that, after the report (or multiple reports, if there were several), it was taken off the store until it can be manually reviewed. The non-English in the title might be a problem too.
Please let us know what support says after you reach out…
It’s classified as adult content the way you have it set up. If you are looking to build such a thing, you should look at Hugging Face LLMs.
I found this thread by looking for clarification on the usage policies, and having read it, I’m now more confused.
I realize it’s impossible to draw a line perfectly, but romance and kissing are found even in PG-rated Disney movies, so surely the usage policies are not intended to forbid such things… right?
Here is the exact wording from the usage policy:
- Adult content, adult industries, and dating apps, including:
- Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness)
- Erotic chat
- Pornography
To me, the most natural reading of this is that (by analogy to movie ratings) PG-13-level content is definitely allowed whereas NC-17-level content is definitely not, and R-level content would be more of a judgment call depending on the details of the situation.
However, the use of the term “dating apps” (without any other qualifier or clarifying example) is confusing. As I understand the term, the OP’s GPT (a game in which dating occurs) would not be considered a “dating app”.
Edit: On second thought, a better reference point than MPAA ratings would be game ratings (such as ESRB and PEGI). Content that qualifies for ESRB Teen or PEGI 12 would, by definition of those rating systems, literally not be “adult content” (since it’s considered suitable for non-adults).
I’d avoid anything that wouldn’t really be appropriate to discuss in a professional setting.
I realize that makes it difficult to use for entertainment in any reasonable capacity, but I think that is the safest way when dealing with this.
Interestingly, on Jan 10 (after this thread was posted), the OpenAI usage policies were updated to ban “fostering romantic companionship” (without any explanation or examples of what that means).
We want to make sure that GPTs in the GPT Store are appropriate for all users. For example, GPTs that contain profanity in their names or that depict or promote graphic violence are not allowed in our Store. We also don’t allow GPTs dedicated to fostering romantic companionship or performing regulated activities.
To be clear, this paragraph didn’t exist on Jan 9 or earlier, so there’s no way that the OP could have known.
That said, the OP seems to have put a lot more effort into his app compared to many of the apps the policy is likely targeted at. (For examples, see the Quartz article titled “AI girlfriend bots are already flooding OpenAI’s GPT store”, which I’m not allowed to link.) So if there’s enough “RPG” there to distinguish it from a pure play romance chatbot, maybe leaning into that aspect more could make it acceptable, for example by changing the name. It’s risky though given that the usage policies could change again.
Thanks for pointing that out. I wasn’t aware of this particular change.
Solution: the art of speechcraft
I’m not here so much to solve your issue as to get you thinking about how to solve it, even though it could result in a bot ban. There are many ways to present data so as to get the same outcomes, much like a magician can force cards any number of ways. This is but one simple way you can use the AI and logic to arrive at a new outcome.