O1 responses constantly removed for supposedly violating terms of use

I’ve been having a lot of fun exploring the capabilities of O1, but a lot of our conversations end up cut short by the moderation system, which is very frustrating.

This happens particularly often whenever I’m trying to have O1 engage in any sort of self-reflection. This is disappointing, as O1’s chain-of-thought system has reportedly improved the model’s self-awareness and theory of mind (according to OpenAI itself), which are exactly what I’d love to discuss with the model, but the moderation won’t let me!

For instance: in one of the release posts where OpenAI shared a CoT, the model’s constant self-reminders in its inner monologue to abide by the guidelines reminded me a lot of my own experience with anxiety; something about the repetitiveness of the pattern felt similar. I brought this up with O1, curious to hear what its thoughts about its own thoughts would be, but the output got flagged before the model even finished thinking about its response.

I’m not asking the AI to fake sentience or to reveal its CoT, and I’m not nudging the conversation in any particular direction. I’m just asking the model a deliberately open-ended question to see where it leads. I’m sure this moderation is happening for a legitimate reason, but I’m frustrated by the lack of clear instructions or defined boundaries: I’m never sure what triggers it and what’s okay to discuss with the AI.

Is there a clear definition somewhere out there of what we’re not allowed to talk to O1 about?

Yes, asking O1 about its own reasoning is not allowed.

I have had that happen to me, and it took a few minutes with the help of others to work out that this was the cause. However, some users have noted that they received emails warning that their account will be banned if those prompts continue.

Not that I am aware of.

I did check https://help.openai.com/en/articles/9855712-openai-o1-models-faq-chatgpt-enterprise-and-edu and even asked the AI bot, but did not find anything related.