@momomq201, I experienced a similar issue when my account was deactivated last week, and I still haven't received any response from OpenAI. I've tried reaching out to their support team, but to no avail.
I’ve been using OpenAI’s services via the API since the GPT-3 era. In three years I never encountered a single issue with my account. However, the deactivation occurred shortly after I started using the o1 models.
I haven’t violated any policies or done anything against their terms of service. My suspicion is that the issue is related to the reasoning models (o1, o1-mini, or o1-pro). These models generate internal chain-of-thought traces, and during generation they might produce content that the system flags as policy-violating, even though the user did nothing wrong.
For example, say you ask the model to summarize a news article about a public event and attach the article to the context. The model’s internal chain of thought might look something like this:
- “User has provided an article for summarization”
- “First, I need to read and understand the full article”
- “I notice this is from The New York Times - I should check copyright implications”
- “Reproducing copyrighted content could be a violation”
- “Let me check if summarization falls under fair use”
- “There’s uncertainty about fair use in AI contexts”
- “This might constitute copyright infringement”
- “I should flag this as a potential violation”
Even though your request for a summary was completely legitimate and would typically fall under fair use, the model’s internal deliberation process might trigger automated policy violation detection systems. The system might pick up on these internal thoughts about potential copyright issues and flag your account, despite your prompt being perfectly acceptable.
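To make the failure mode I’m describing concrete, here’s a toy sketch. This is purely hypothetical: I have no visibility into OpenAI’s moderation internals, and the function and keyword list below are my own inventions. It just shows how a naive automated scanner that matches policy keywords against chain-of-thought text could flag benign deliberation like the steps above:

```python
# Hypothetical sketch, NOT OpenAI's actual pipeline. Shows how a naive
# keyword-based scanner applied to internal chain-of-thought text could
# flag a model's own cautious reasoning about a perfectly legitimate request.

# Invented keyword list for illustration only.
POLICY_KEYWORDS = {"copyright infringement", "violation"}

def naive_cot_filter(cot_steps):
    """Return the chain-of-thought steps a keyword scanner would flag."""
    flagged = []
    for step in cot_steps:
        lowered = step.lower()
        # The scanner cannot tell deliberation ABOUT a policy from an
        # actual policy breach; any keyword match gets flagged.
        if any(keyword in lowered for keyword in POLICY_KEYWORDS):
            flagged.append(step)
    return flagged

cot = [
    "User has provided an article for summarization",
    "Reproducing copyrighted content could be a violation",
    "This might constitute copyright infringement",
]

for step in naive_cot_filter(cot):
    print("FLAGGED:", step)
```

Notice that the flagged steps are the model being *careful*, not the user doing anything wrong, yet a dumb enough filter would count them as violations all the same.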
@edwinarbus, what are your thoughts on this? I believe this will cause further issues if appropriate steps aren’t taken. I’ve been seeing this happen more frequently, and I suspect you’ll encounter the same problem soon.