Hello, I received an email today saying my OpenAI account has been terminated. This was the message: "After a thorough investigation, we have determined that you or a member of your organization are using the OpenAI API in ways that violate our policies."
I have never even used the OpenAI API; I've only queried ChatGPT 3.5 & 4.0. All of my interactions with GPT have been professional: never questionable content, never even a curse word.
I know it will take weeks for them to respond, but has anyone had a similar experience, and how did you regain access to your account?
Does this mean OpenAI is expecting developers to somehow “pre-filter” questions that perhaps an end user (i.e. the public) has pasted into a query field? My app lets users enter questions to be asked. Is this not an acceptable use? I can’t really control what my users ask.
I’m hoping an OpenAI employee can answer this. I should probably post it as a top-level question, because it affects every API developer in the world.
So many questions entered my mind when I got the email. If I can’t even query ChatGPT with 100% clean content (literally 99.9% of it is business, finance, or brand related), who else could possibly use it without being terminated? On top of that, I’ve never even used the OpenAI API! What could possibly have gotten me banned? I went on Reddit and ran down a thread where people are literally typing the most grotesque things possible into it, or asking it how to make meth (seriously), and they have not been terminated, but I have?
I know this wasn’t the case with you, but if it’s true that “bad questions” (illegal, immoral, etc.) passed in through the API can get accounts banned, that would basically destroy OpenAI’s entire business model, because all anyone would have to do to get my entire business shut down is use my service to feed some illegal questions through to ChatGPT.
Surely this is not the case. This would make OpenAI mostly unusable for any kind of “open ended” inputs from customers.
I’d politely disagree with your understanding of what the Moderation API is for. I think they intend it to be used by API developers who want to determine whether some content is going to be “bad”, so the apps using the API can act on that information.
That’s very different from them saying “All OpenAI API requests must be pre-filtered to avoid risk of account termination.”
I read that exact paragraph, and it reads like information I can provide to MY customers about why their queries might be getting rejected, if they do get rejected.
If there is a threat of account termination for not filtering 100% of messages through that endpoint first, they would need to state that explicitly in the policy.
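For what it’s worth, if pre-filtering ever did become mandatory, the flow people are describing would look roughly like this. A minimal sketch in Python: `handle_user_question` is a made-up helper, and `call_moderation` / `call_chat` stand in for real HTTP calls to the moderation and chat endpoints; only the `results[].flagged` response shape is taken from the public moderation docs.

```python
def is_flagged(moderation_response: dict) -> bool:
    """Return True if any result in a /v1/moderations-style response is flagged."""
    return any(r.get("flagged", False) for r in moderation_response.get("results", []))

def handle_user_question(question: str, call_moderation, call_chat) -> str:
    # Pre-filter: run the user's text through moderation first, and only
    # forward it to the chat endpoint if nothing was flagged.
    if is_flagged(call_moderation(question)):
        return "Sorry, that question was rejected by the content filter."
    return call_chat(question)
```

The point of injecting the two callables is just to show the ordering: one extra round trip before every generation, which is exactly the cost being debated below.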
EDIT: Also, if what you’re saying were true, it would make sense to have an “onlyRunIfLegal” option on the main query endpoint, and they would have to specify: “Warning: Setting this flag to FALSE can result in your company being shut down.” I just don’t see that being the case.
And there’s no way they require every query to be submitted TWICE to avoid risk of cancellation, when a flag like “onlyRunIfLegal” could accomplish the same thing.
OpenAI is a very new company. I agree it’s possible they ARE expecting every query to be submitted twice (first validate, then run) if it’s coming from an end user, but if that’s the case it’s a very severe oversight, because all they need is an “onlyRunIfLegal” argument to their API.
They would double their revenue by causing every query to be submitted twice like that, so it’s not out of the question that they’d do whatever benefits them most financially.
Ok, I didn’t notice that free part. Thanks. Good to know.
I just believe OpenAI is validating 100% of API calls whether I do it myself or not.
So for them to be canceling accounts because the developer himself didn’t call the Moderation API first would be a real tricky “gotcha game”, and I don’t think they’d do that. Not after reading this thread, anyway! haha.
However, I admit, you did convince me to disable OpenAI in my product until I add the moderation endpoint to EVERY call (to be safe), pending clarification from OpenAI themselves, because I don’t consider it settled in my mind.
I read all those moderation links very carefully, and it all looks like it’s provided as “a benefit to the developers” (i.e. solely for the benefit of OpenAI’s customers); the Moderation endpoint is in no way framed as “to keep you out of trouble with OpenAI, because you didn’t pre-filter input from users.”
All that being said, there’s likely also something akin to a social credit score on every account, so I might as well call the Moderation endpoint as a show of good faith, AND because I can then tell my users when they’re doing something OpenAI doesn’t want, before even running the query.
And I was the one who said there is NOT a “gotcha game”, btw, just to be clear.
The rules and consequences have always warranted extra attention when working with OpenAI. This was even more so in the past, when the rules were stricter, the Moderation API was not as good, and the approval process for new applications (or accounts) was lengthier. I’ve always been terrified that a small oversight could lose me my account.
So in addition to the Moderation API for pre-filtering, one thing you should also do is pass an identifier (non-personal data) to OpenAI along with your query, so that if there is ever any inappropriate content, you can identify and ban the user who is the source of it, rather than have your entire account banned. They support user IDs for that very reason, and we implemented it as an additional safeguard against accidentally missing anything.
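To make that concrete, here’s a minimal sketch of what attaching an end-user identifier might look like. The `user` field on a completion request is the documented hook for this; the hashing step, the helper names, and the model string are my own assumptions, chosen to illustrate keeping personal data out of the identifier.

```python
import hashlib

def end_user_id(db_user_id: int) -> str:
    # Hash our internal DB ID so no personal data leaves our system,
    # while staying stable per user so OpenAI can correlate abuse reports.
    return hashlib.sha256(str(db_user_id).encode()).hexdigest()[:32]

def build_chat_request(question: str, db_user_id: int) -> dict:
    # Request body for a chat completion; the `user` field attributes
    # the request to one of our end users.
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": question}],
        "user": end_user_id(db_user_id),
    }
```

If OpenAI ever flags a request, the report points at one hashed user, not at the whole account, and we can look the hash up in our own DB and ban that user.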
Received a response from OpenAI: they confirmed my account is not terminated and that I should be able to use their services. Unfortunately, my account is still not working. Pretty grateful it took less than 24 hours to get a response and that they’re actively engaged. Will continue to update.
Passing in the UserID from the DB is a great idea, thanks! I’ll also be keeping complete logs of every question asked, and notifying customers that we do that, and why.
At the very least, calling moderation beforehand would add a full round trip of latency. Maybe our free or unauthenticated users have to wait for moderation results, but for paid users we just send the prompt straight through and let it get moderated as part of the generation process, i.e. only pre-filter based on trust level.
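A rough sketch of that trust-based gating, where the trust rule, the field names, and the `moderate` / `generate` callables are all made up for illustration:

```python
def needs_pre_moderation(user: dict) -> bool:
    # Only free or unauthenticated users pay the extra round trip;
    # authenticated paid users skip the separate moderation call.
    return not user.get("authenticated", False) or not user.get("paid", False)

def answer_question(question: str, user: dict, moderate, generate):
    # `moderate` and `generate` stand in for the real endpoint calls.
    if needs_pre_moderation(question and user):
        flagged = any(r.get("flagged", False) for r in moderate(question)["results"])
        if flagged:
            return None  # reject before spending a generation call
    return generate(question)
```

Trusted users get a one-round-trip path, while anonymous traffic is still pre-filtered, which is a middle ground between moderating everything and moderating nothing.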