Moderation is way too sensitive - sora-2

I'm having difficulty getting generations to go through with really benign input_reference images and prompts. It's hard to imagine what could be triggering the moderation.

The generation takes a long time (which might just be time spent in the `queued` state), then eventually fails with `error=VideoCreateError(code='moderation_blocked')`.
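Roughly what I'm doing, for reference. The method and parameter names here (`videos.create`, `model`, `prompt`, `input_reference`) are my best guess at the videos endpoint, so double-check against the current API reference:

```python
# Rough repro of the flow described above. Client methods and parameter
# names are assumptions based on the public videos endpoint, not confirmed.
from openai import OpenAI

client = OpenAI()

with open("reference.png", "rb") as ref:
    video = client.videos.create(
        model="sora-2",
        prompt="A person strolls through a park on a sunny afternoon.",
        input_reference=ref,  # benign image of a synthetic person
    )

print(video.id, video.status)  # sits in "queued" for a while, then fails
```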

Triggers:

- Any people in reference images.
- Any recognizable people in output images.
- Others listed in the documentation for the model.

Sora-2 on the API will not let you do the very thing OpenAI advertises the product for: "put me in a video with Bigfoot".


Yeah, I read the docs.

The images are of synthetic people, somewhat photorealistic.

I think some prompts might accidentally elicit real people in the videos, and that then triggers post-hoc auto-moderation. For example, if your reference image has a guy whose back is turned but you describe him as "a director", the model might decide he looks like George Lucas and generate that, which then trips a post-hoc filter on the output. No idea, just a guess.

Anyway, the point of mentioning it is to get the developers' attention; the moderation seems to need tuning.


From my testing, they are definitely checking your prompt plus however that gets converted by their "physics engine" into a start image (speculation), probably checking the start image itself, and my guess is they also check every Xth frame of the video output. Moderation fails very early if I worded something poorly in the prompt or made a typo; other times I realize my prompt may have pushed the output in the wrong direction, and I'll see the generation sit at 90-99% and then get moderated, so I would agree with you on that.

They are being extra safe. The stuff this can generate is genuinely dangerous, in all honesty, and I'm glad they have it on a leash for now until it's more stable.
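If you want to see where in the pipeline your own jobs are getting flagged, something like this shows the last reported progress before a moderation failure. The `status`, `progress`, and `error.code` fields are my assumptions about the SDK's response shape, so adjust to whatever it actually returns:

```python
# Poll a video job and log progress, so you can see whether moderation hits
# early (prompt-level) or near the end (output-level). Field names here
# (status, progress, error.code) are assumptions, not confirmed API shapes.
import time

from openai import OpenAI

client = OpenAI()

video = client.videos.create(model="sora-2", prompt="A dog chases a ball.")

while video.status in ("queued", "in_progress"):
    time.sleep(10)
    video = client.videos.retrieve(video.id)
    print(f"status={video.status} progress={getattr(video, 'progress', None)}")

if video.status == "failed":
    err = getattr(video, "error", None)
    if err and getattr(err, "code", None) == "moderation_blocked":
        # The last progress value printed hints at which stage flagged it.
        print("moderation_blocked at progress", getattr(video, "progress", None))
```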