Hello, I received an email today saying my OpenAI account has been terminated. This was the message: After a thorough investigation, we have determined that you or a member of your organization are using the OpenAI API in ways that violate our policies.
I have never even used the OpenAI API; I’ve only queried ChatGPT 3.5 and 4.0. All of my interactions with GPT have been professional, never questionable content; I’ve never even typed a curse word.
I know it will take weeks for them to respond, but has anyone had a similar experience, and how did you regain access to your account?
Does this mean OpenAI expects developers to somehow “pre-filter” questions that an end user (i.e. the public) has pasted into a query field? My app lets users enter their own questions. Is this not an acceptable use? I can’t really control what my users ask.
I’m hoping an OpenAI employee can answer this. I probably should post it as a top-level question, because it affects every API developer out there.
So many questions entered my mind when I got the email. If I can’t even query ChatGPT with 100% clean content (literally 99.9% of it is business, finance, or brand related) without being terminated, who could? On top of that, I’ve never even used the OpenAI API! What could possibly have gotten me banned? I went on Reddit and ran down a thread where people are literally typing the most grotesque things possible into it, or asking it how to make meth (seriously), and they have not been terminated, but I have?
I know this wasn’t the case with you, but if it’s true that “bad questions” (illegal, immoral, etc.) passed in through the API can get accounts banned, that would basically destroy OpenAI’s entire business model, because all anyone would have to do to get my entire business shut down is use my service to feed some illegal questions through to ChatGPT.
Surely this is not the case. It would make OpenAI mostly unusable for any kind of “open-ended” input from customers.
I’d politely disagree with your understanding of what the Moderation API is for. I think they intend it to be usable by API developers who want to determine whether some content is going to be “bad”, so apps built on the API can make use of that information.
That’s very different from them saying “All OpenAI API requests must be pre-filtered to avoid risk of account termination.”
I read that exact paragraph, and it reads like information I can provide to MY customers about why their queries might be getting rejected, if they do get rejected.
If there is a threat of account termination for not filtering 100% of messages through that endpoint first, they would need to state that explicitly in the policy.
EDIT: Also, if what you were saying were true, it would make sense to have an “onlyRunIfLegal” option on the main query endpoint, and they would have to specify: “Warning: setting this flag to FALSE can result in your company being shut down.” I just don’t see that being the case.
And there’s no way they require every query to be submitted TWICE to avoid risk of cancellation when a flag like “onlyRunIfLegal” could accomplish the same thing.
It’s okay to be wrong; it’s not okay to stubbornly dig in and double down when you are confronted with ever-increasing evidence of the truth.
If your API key makes a bunch of bad requests, your account is not going to exist anymore.
As a developer, it is your responsibility to ensure you stop most of the disallowed things before they get to a language model.
OpenAI gives you, free of charge, a tool to do exactly that. I’m not seeing what the problem is here.
With respect to the OP, OpenAI is saying their account behaved badly with the API. Maybe it did, maybe it didn’t; we don’t know. They claim to have never used the API.
Maybe they created an API account imagining they’d use it but never did, didn’t secure the key, and someone else used it. Maybe they popped the key into a bring-your-own-key service and the operator of that site did something they shouldn’t have? Or maybe that site had a lot of other users submitting violating messages to the models and the OP got caught up in it, guilt by association.
Who knows, not us for sure.
Regardless, the solution for all API users, now and in the future, is to use the moderations endpoint and police the content they are sending over the wire, or face potentially severe consequences.
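For anyone who wants the concrete version, here’s a minimal sketch of that pre-check using the openai Python package (v1-style client); the input text and the handling of a flagged result are just placeholders, not an official recipe:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One free call screens the text before it goes anywhere else.
result = client.moderations.create(input="text a user typed in").results[0]
if result.flagged:
    # List which categories tripped (hate, violence, self-harm, etc.)
    tripped = [name for name, hit in result.categories.model_dump().items() if hit]
    print("Blocked by moderation:", ", ".join(tripped))
else:
    print("Clean; safe to forward to the model.")
```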
OpenAI is a very new company. I agree it’s possible they ARE expecting every query to be submitted twice (first validate, then run) if it comes from an end user, but if that’s the case it’s a very severe oversight, because all they would need is an “onlyRunIfLegal” argument on their API.
They would effectively double their revenue by causing every query to be submitted twice like that, so it’s definitely not out of the question that they’d do whatever benefits them the most financially.
Ok, I didn’t notice that free part. Thanks. Good to know.
I just believe OpenAI is validating 100% of API calls whether I do it myself or not.
So for them to be canceling accounts because the developer didn’t call the moderation API first would be a real tricky “gotcha game”, and I don’t think they’d do that. Not after reading this thread, anyway! haha.
However, I admit you did convince me to disable OpenAI in my product until I add the moderation endpoint to EVERY call (to be safe), pending clarification from OpenAI themselves on this, because I don’t consider it settled in my mind.
The API has less strict filters than, say, ChatGPT, because there are valid use cases for content that would be flagged by the stricter filters on their consumer offerings.
Then, for instance, if you notice a bunch of things that you think should be let through but that moderations flags, you can provide feedback for the next update.
Or you can apply stricter guidelines if you want; they also write that you can design your own moderation filter.
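For example, if you wanted a stricter bar than the default flagged boolean, you could threshold the raw category scores yourself. The cutoff below is invented for illustration, not a recommendation:

```python
from openai import OpenAI

client = OpenAI()

def too_risky(text: str, threshold: float = 0.2) -> bool:
    scores = client.moderations.create(input=text).results[0].category_scores
    # Stricter than OpenAI's own `flagged` boolean: reject if ANY raw
    # category score crosses our threshold. 0.2 is an arbitrary example;
    # tune it for your own audience.
    return any(score >= threshold for score in scores.model_dump().values())
```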
I think they are expecting some non-zero amount of unacceptable content to get through, but if a developer is making a good-faith effort to minimize that content, there won’t be a problem.
The easiest way to demonstrate that good faith effort is to just use the free moderations endpoint.
It should be a fairly straightforward process: insert the moderation call, then wrap your current call in a conditional, with a little bit of code to kick an error or warning message back to the user.
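Roughly something like this (the function name, model name, and message wording are all placeholders):

```python
from openai import OpenAI

client = OpenAI()

def ask(question: str) -> str:
    # Pre-filter: one free moderation call before the paid completion call.
    if client.moderations.create(input=question).results[0].flagged:
        # Kick a warning back to the user instead of hitting the model.
        return "Sorry, that question appears to violate usage policies and was not sent."
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content
```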
It will also end up saving you money, because you won’t be sending a ton of tokens to the models as users repeatedly try to reframe their message to side-step a model that refuses to answer.
I read all those moderation links very carefully, and it all reads as something provided as “a benefit to the developers” (i.e. solely for the benefit of OpenAI’s customers); nowhere does it frame the Moderation endpoint as “to keep you out of trouble with OpenAI, because you didn’t pre-filter input from users.”
All that being said, there’s likely also something akin to a social credit score on every account, so I might as well call the Moderation endpoint as a show of good faith, AND because I can then tell my users when they’re asking something OpenAI doesn’t allow, before even running the query.
And I was the one who said there is NOT a “gotcha game”, btw, just to be clear.
The rules and consequences have always warranted extra attention when working with OpenAI. This was even more so in the past, when the rules were stricter, the moderation API wasn’t as good, and the approval process for new applications (or accounts) was lengthier. I’ve always been terrified that a small oversight could lose me my account.
So in addition to using the moderation API for pre-filtering, you should also pass an identifier (non-personal data) to OpenAI along with your query, so that if there is ever any inappropriate content, you can identify and ban the user who is the source of it, rather than have your entire account banned. They support user IDs for that very reason, and we implemented it as an additional measure of security against accidentally missing anything.
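In code, this is just the user field on the completion request. A sketch, assuming you hash an internal ID so nothing personal leaves your system (the hashing scheme here is just one option):

```python
import hashlib

from openai import OpenAI

client = OpenAI()

def safe_user_id(internal_id: str) -> str:
    # Hash your internal ID so raw emails/usernames never leave your system.
    return hashlib.sha256(internal_id.encode()).hexdigest()

resp = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": "the user's question here"}],
    user=safe_user_id("account-12345"),  # traceable by you, non-personal to OpenAI
)
```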