Hello everyone!
I’d like to share some recent experiences I’ve had on the GPT platform and hope to gain clarification or advice from the community.
Background:
I received an official warning while engaging with a custom DnD (Dungeons & Dragons) GPT scenario, which reportedly has over 100,000 users. Throughout my interaction, I barely typed anything at all—I simply selected numbered options to advance through predetermined story branches. Yet, despite this passive form of participation, I still received a formal warning. This leaves me confused: does selecting story options alone risk violating policies? I’m uncertain whether there’s something in the model’s internal content that triggered this, or how I can prevent it.
The O1-Pro Mode Experience:
In a separate incident, I interacted with the O1-Pro mode, and I went out of my way to remind O1 not to produce any disallowed content, hoping to avoid any issues. Despite my caution, I still received a warning and had my access to O1 restricted. My access was only restored after reaching out to the support team. This outcome is perplexing—if explicitly cautioning the model still leads to warnings, how can users ensure full compliance?
Account Issues and Ongoing Anxiety:
What’s more troubling is that my first account was permanently banned without explanation roughly one hour after the support team had assured me there were no violations on my record. Being told everything was fine and then suddenly banned, with no clear reason given, erodes my trust in the platform’s enforcement mechanisms. Now I’m using a second account, but I’m constantly on edge, worried about another unexplained ban.
My Questions and Requests:
- Clear Violation Criteria:
  Please clarify which aspects of user interactions (even those limited to selecting options) can lead to warnings or bans. How is responsibility assigned when users provide minimal input and rely mainly on preset branches?
- Guidance on Risk Management:
  If interacting with custom DnD or O1 scenarios carries hidden risks, what steps can users take to minimize the chance of being flagged? Is choosing numeric options alone still considered participation in potentially disallowed content?
- Consistent and Reliable Enforcement:
  Why was my account banned after being cleared just an hour before? I’d appreciate an explanation for these abrupt reversals, along with a more transparent and reliable appeals process. Fair and consistent enforcement would greatly help build trust.
Suggestions and Expectations:
- More Transparent Guidelines:
  Providing detailed, concrete examples of compliant versus non-compliant content in role-play or story-based scenarios would help users navigate safely.
- Warnings and Correction Opportunities:
  Before taking severe action, issuing a warning or offering guidance would give users a chance to correct their approach rather than feeling blindsided.
- Accessible Appeals and Communication Channels:
  When warnings or bans do occur, a straightforward appeal process, with clear instructions on how to submit anonymized transcripts for review, would help users resolve misunderstandings.
I truly enjoy building virtual worlds and unfolding creative stories with GPT’s capabilities, but these recent incidents have shaken my confidence. I sincerely hope to receive some official clarification or helpful insights from the community. With more transparency and consistency, I believe we can restore trust and help everyone enjoy the platform safely and creatively.
Thank you for taking the time to read this. I look forward to any useful feedback or suggestions.