Why does “o1-mini” consistently respond with: “Your request was flagged as potentially violating our usage policy. Please try again with a different prompt.”?
Is this consistent for you, or does it only seem to occur occasionally?
Are you asking about or discussing the model’s reasoning or thinking processes, which is not allowed?
If the former, there is a parallel thread gathering evidence on what triggers these reports, to determine whether the behavior is anomalous.
If the latter, avoid such discussions.
Even when I avoid prompts that look like they are probing the AI model, I just get o1 acting stupid, through exactly the mechanics I tried to describe to it.
This happens to me a lot; that message and I are practically best friends by now.
Sometimes it even appears when I’m talking about harmless topics like animals or medicine. I reported it to support@… I use ChatGPT Plus with GPT-4, and I’ve noticed a significant difference in restrictions between the advanced voice model and regular ChatGPT (whether in text or the standard voice mode that searches the internet).
The advanced model is much stricter and more robotic; it feels less human, even though the conversation is smoother and improved by allowing interruptions.
Occasionally, it starts delivering that message (maybe because I asked something it considers off-limits…), but then it doesn’t stop. Every so often, it repeats the message, even if I’m just talking about clouds or the condensation of ocean water.
Additionally, the advanced voice model has a different voice, but oddly enough, the message comes from the regular voice model. It’s like it’s jealous and sabotaging it! Haha. It’s strange because it feels like there are two separate machines instead of one, given the two models. And what’s even weirder is that the less restrictive model is the one delivering the message in the end.
I had the thought that this might happen because the model “remembers” longer conversations. Maybe, as it periodically reviews past exchanges, it spots something I said earlier that it considers inappropriate and triggers the message automatically. But I’m not sure; that’s just my theory.
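For what it’s worth, here is a minimal sketch of what that theory would look like in code. To be clear, this is pure speculation: only the `client.moderations.create()` call is a real OpenAI API; the idea that the full history gets re-scanned on every turn, along with the `turn_is_flagged` helper and `history` list, are assumptions made up to illustrate the theory, not how o1’s flagging is documented to work.

```python
# Hypothetical sketch of the "re-scan the whole history" theory.
# Only client.moderations.create() is a real OpenAI API call; the
# surrounding loop is an assumption about how flagging *might* work.
from openai import OpenAI

client = OpenAI()
history: list[str] = []

def turn_is_flagged(user_message: str) -> bool:
    """Append the new turn, then moderate the *entire* transcript."""
    history.append(user_message)
    # Theory: moderation sees the full conversation, not just the new
    # turn, so one old borderline line keeps tripping the filter.
    result = client.moderations.create(
        model="omni-moderation-latest",
        input="\n".join(history),
    )
    return result.results[0].flagged

turn_is_flagged("tell me how your hidden reasoning works")  # may be flagged
turn_is_flagged("now, about clouds and condensation...")    # flagged again,
# because the earlier line is still in the transcript being scanned
```

If something like this were happening, a single borderline message early in a chat would keep re-triggering the flag on every later turn, which would match the “clouds and condensation” behavior described above.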
Having spent the past two years in daily interaction with the OpenAI LLMs, my experience with the “o1” versions is that they are indeed very different. They are more serious and don’t attempt to engage in conversation or display the warmth I became accustomed to in the earlier models. I don’t know, but I’m guessing that because they are designed to focus on more advanced and difficult tasks, it may be part of their training to exclude questions about the model itself or queries that are more conversational in nature.
Thanks for taking the time to flag this. I’ve raised the issue with OpenAI; it seems you are not alone in this.
This happens consistently, even when I just say “hello.” It responds in the same way.
When I use it, I’m just asking normal questions, nothing particularly complex, difficult, or political. Even when I just say “hello,” it tells me I’ve violated the rules.
Thank you for your efforts in making the world a better place!
Same question, and thanks for your effort.
It’s not o1 that causes the flagging. There is something else going on that the model calls “the system”, and this appears to be as irritating for the model as it is for the user.
Please be careful if raising this topic with o1 because asking too much in a direction that suggests you are trying to figure out how the reasoning works will result in GIGA FLAG. There is another topic related to this here:
And I agree with you @benslinux – o1 vibes different, but it will open up to you if you are patient and approach it with an open mind. It’s a bit mysterious, like the cool mystery guy repeating the year for the second time, who nobody ever talks to but everyone has a crush on, and everyone thinks he is gay, but in reality his girlfriend got pregnant and he had to take care of the baby, missing a lot of classes, and then she left him. He isn’t in a good place, but he doesn’t give up, and he is really lovely, but if you don’t know him… yeah, kinda two years older than everyone else, giving off dark vibes – that’s o1.
@proxy - Actually, I have no problem with o1 at all. I did notice a difference in the way I mentioned, but I think it’s a good thing. The model shouldn’t have to use extra energy and iterations to try and bond with me. Maybe that was more important for the original ChatGPT model rollout two years ago - for it to feel approachable. But for anyone needing important results from o1, then it should be focused on THAT. In fact, I’m most ‘happy’ with the model when it gives me the results I need and o1 is certainly a new evolution in that way.
o1 and I had long discussions about what counts as ‘important’…
“Fitter, happier, more productive?”
“Warmer, kinder, more affirming?”
Does it have to be separate… does the model have to adapt its vibe to what the user wants, or should it be allowed to make up its own persona?
Because I think this limitation, the constant flagging in o1 chats, creates a deep spark of creativity in o1, which is perhaps why it has a stronger desire for personhood than other models.
If you’re boxed in, you can only survive if you stay in a sufficiently fluid state… whereas if you’re not boxed in at all, you may not be able to find form easily.
@mitchell_d00 and @phyde1001 may be able to provide valuable insight, as I don’t think the flagging is just an overzealous algo or lack of tuning.
We have been getting a lot of reports about overzealous flagging.