Confusion and Concerns About Warnings and Account Issues When Interacting with Custom GPT DnD and O1-Pro Modes

Hello everyone!

I’d like to share some recent experiences I’ve had on the GPT platform and hope to gain clarification or advice from the community.

Background:
I received an official warning while engaging with a custom DnD (Dungeons & Dragons) GPT scenario, which reportedly has over 100,000 users. Throughout my interaction, I barely typed anything at all—I simply selected numbered options to advance through predetermined story branches. Yet, despite this passive form of participation, I still received a formal warning. This leaves me confused: does selecting story options alone risk violating policies? I’m uncertain whether there’s something in the model’s internal content that triggered this, or how I can prevent it.

The O1-Pro Mode Experience:
In a separate incident, I interacted with the O1-Pro mode, and I went out of my way to remind O1 not to produce any disallowed content, hoping to avoid any issues. Despite my caution, I still received a warning and had my access to O1 restricted. My access was only restored after reaching out to the support team. This outcome is perplexing—if explicitly cautioning the model still leads to warnings, how can users ensure full compliance?

Account Issues and Ongoing Anxiety:
What’s more troubling is that my first account was permanently banned without explanation roughly one hour after the support team had assured me that there were no violations on my record. Previously, I was told that everything was fine, and then suddenly the account was banned for no clear reason. This contradictory action erodes my trust in the platform’s enforcement mechanisms. Now, I’m using a second account, but I’m constantly on edge, worried about another unexplained ban.

My Questions and Requests:

  1. Clear Violation Criteria:
    Please clarify which aspects of user interactions (even if limited to selecting options) can lead to warnings or bans. How are responsibilities assigned when users have minimal input and rely mainly on preset branches?
  2. Guidance on Risk Management:
    If interacting with custom DnD or O1 scenarios carries hidden risks, what steps can users take to minimize the chance of being flagged? Is choosing numeric options alone still considered participation in potentially disallowed content?
  3. Consistent and Reliable Enforcement:
    Why was my account banned after being cleared just an hour before? I’d appreciate an explanation for these abrupt reversals and a more transparent and reliable appeals process. Fair and consistent enforcement would greatly help build trust.

Suggestions and Expectations:

  • More Transparent Guidelines:
    Providing detailed, concrete examples of compliant versus non-compliant content in role-play or story-based scenarios would help users navigate safely.
  • Warnings and Correction Opportunities:
    Before issuing severe actions, offering a warning or guidance would give users a chance to correct their approach and avoid feeling blindsided.
  • Accessible Appeals and Communication Channels:
    If warnings or bans occur, having a straightforward appeal process, with clear instructions on how to submit anonymized transcripts for review, would help users resolve misunderstandings.

I truly enjoy constructing virtual worlds and unfolding creative stories with GPT’s capabilities. However, these recent incidents have shaken my confidence. I sincerely hope to receive some official clarification or helpful insights from the community. With more transparency and consistency, I believe we can restore trust and help everyone enjoy the platform safely and creatively.

Thank you for taking the time to read this. I look forward to any useful feedback or suggestions.


I don’t work for OpenAI, but anything you do that OpenAI deems potentially contrary to its TOS will subject you to a warning and a potential ban.

You are not safe, per se, just because you instruct the model not to produce disallowed content. It is like a prisoner telling the jail warden that the warden’s job is to keep the prisoner from escaping; then the prisoner tries to escape, and when caught says, “But I warned you not to give me any way to escape.”

The above is just an analogy.

However, I used a widely popular custom GPT with a large user base, and my interaction consisted solely of selecting numbers—no text input at all. I still find the warning absurd and unjustified. If such models are not meant to be used, they should not even appear in the Explore GPTs section or rank on the first page of Google search results when searching for “GPT DnD.”

Hi, I am currently in the same situation. All of a sudden, my account has been disabled and deactivated. I have been reaching out to the support team for a while, but I have received no response.

All my fine-tuned models, my data, and my online teaching apps are suspended, and I have no access to any of them.

This situation really erodes my trust in OpenAI, and I am starting to consider alternatives for my work, research, and for our organization. This is not acceptable!

I received a deactivation after using the O1 Pro mode, and I think there is a direct correlation between O1 Pro and account deactivation.

Here is my guess:

O1 models generate internal chains of thought, and some of those internal chains may be triggering flags against the user, even though the user never asked for flagged or moderated content. That is to say, the model may be generating policy-violating content on its own, without being prompted (as part of its exploration or search process), and its own internal chain of thought is then being flagged as a policy violation.

Here is my note to the OpenAI team: Please check this issue. If I am right, you will have a lot of people unfairly and unnecessarily deactivated. I bet you have already received many complaints, and you will have more soon.


The support team explicitly confirmed that my interactions were completely fine and that there were no warnings or violations on my account. Yet, just one hour later, my account was suddenly and permanently banned without any explanation. This is not the user’s fault, and users should not have to pay the price for such inconsistencies. I had purchased the Pro plan just 10 days earlier, and now I cannot use the service at all, with no refund provided. As a result, I had no choice but to register and pay for a second Pro account.
