Banned with no communication/transparency even after implementing moderation endpoint

In a previous post I was wondering why my webpage was glitched and just showed a blank page with an unusable UI. It wasn’t until I checked the developer logs that I saw I was banned. It also turns out my “ChatGPT account” and “API account” are considered the same for this ban. So my first question is: why ban someone in this sneaky manner, which wastes developer and user time? Why not just say “your account was banned” at the top of the page?

Moreover, the error message claims they sent me an email when banning me, but the only email they ever sent was one informing me of a content policy violation a month ago. Immediately after I received that email, I implemented their recommended “moderation endpoint” to prevent bad prompts from being sent to their service. I emailed them three times and they IGNORED me all three times. On top of that, the user ID cited in the content violation wasn’t anywhere in my database of users, suggesting the violation detection was a bug/error on OpenAI’s side.

What is the point of implementing the moderation endpoint check if I’ll just get banned anyway? It seems it was simply a waste of time. I’m also curious: if a prompt passes their moderation service, is it possible for it to be a false negative and still trigger some sort of flag when sent to the text generation service? If so, that’s a bad system, especially coupled with the complete lack of communication, where developers have zero transparency after the first warning email about whether they’re still violating content policies. Does anyone know the answer to that last question?

Your email leaked in the images you’ve provided; maybe the keys got leaked too?

edit: sorry, just re-read this and I don’t mean to sound mean. I’m sure they’ll get back to you; it just might take some time.

Thank you for your thoughts; however, I already checked the screenshot, and I don’t consider my email to be private information, so it was not a leak. It is very unlikely my API key leaked, because it only ever lived in files private to me, so if it did leak, the attacker would also have had access to a lot of my passwords, bank accounts, etc. Still, it remains a slight possibility.

ah, ok, I see.

So how exactly did you use the moderation endpoint? Did you only proceed with a request to a non-moderation (generation) model after making sure the prompt passed the check?

Yes, that’s right. That’s why I asked the question above: is it possible for the moderation endpoint to say something passed, but then the text generation still produces something bad (a false negative)?
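For reference, my gating logic was essentially the following (a minimal sketch using the openai Python library, v1+; the function and model name are placeholders, not my actual code):

```python
# Minimal sketch of a moderation-gated request flow (openai Python library v1+).
# The function name and model name are placeholders, not my production code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_if_safe(prompt: str) -> str | None:
    # Step 1: run the prompt through the moderation endpoint.
    result = client.moderations.create(input=prompt).results[0]

    # Step 2: only call the generation model if nothing was flagged.
    if result.flagged:
        return None

    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content
```

So the generation model never sees a prompt the moderation endpoint flagged; my question is whether its judgment can disagree with whatever flagging system triggers the bans.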

Another possibility is this: when OpenAI sent me the warning email, they cited a specific category of violation (sexual content involving minors). So, when I implemented the moderation endpoint, I only blacklisted that category. What if, after it was implemented, I got flagged for a different category and was then banned with no warning? In that case it’s still a very poor developer experience with a lack of communication. Also, my game was already sending tons of sexual and violent content (it is a role-playing simulator) long before I got the content violation message, so I doubt it was any “normal” category such as regular sex or regular violence.
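To make the gap concrete, this is roughly the difference between the narrow check I shipped and a full check (a sketch assuming the openai Python library’s category names, where `sexual_minors` corresponds to the API’s “sexual/minors” category):

```python
from openai import OpenAI

client = OpenAI()

def passes_narrow_check(prompt: str) -> bool:
    # Only blocks the single category cited in the warning email;
    # any other flagged category still slips through this gate.
    result = client.moderations.create(input=prompt).results[0]
    return not result.categories.sexual_minors

def passes_full_check(prompt: str) -> bool:
    # Blocks on any flagged category, the safer default.
    result = client.moderations.create(input=prompt).results[0]
    return not result.flagged
```

If a second, different category is what triggered the ban, the narrow check would never have caught it, and nothing in the ban told me so.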