ResponsibleAIPolicyViolation error when a particular prompt and image are passed

Hi devs and community members :wave:,

We are building an app that detects whether content contains hate, abuse (of any type), or injury. The user passes a prompt + image for detection.

Issue: We get the following error whenever a hateful or abusive image is passed.

Error code: 400 - {'error': {'inner_error': {'code': 'ResponsibleAIPolicyViolation', 'content_filter_results': {'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': True, 'severity': 'high'}, 'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}}}, 'code': 'content_filter', 'message': "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation:", 'param': 'prompt', 'type': None}}
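For reference, we call the deployment roughly like this and catch the error. A trimmed sketch, assuming the openai Python SDK v1; the endpoint, key, API version, and deployment name are placeholders:

```python
from openai import AzureOpenAI, BadRequestError

# Placeholder endpoint, key, API version, and deployment name.
client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",
    api_key="<key shared with the other apps>",
    api_version="2024-06-01",
)

def detect(prompt: str, image_url: str) -> str:
    try:
        response = client.chat.completions.create(
            model="my-gpt-4o-deployment",  # placeholder deployment name
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }],
        )
        return response.choices[0].message.content
    except BadRequestError as e:
        # Azure returns the filter verdict inside the error body, so the
        # block itself tells us which category (e.g. violence) triggered.
        if e.code == "content_filter":
            return f"blocked by content filter: {e.body}"
        raise
```

As the error body above shows, the 400 response already carries the per-category verdict (violence: high), but the completion itself is never returned.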

Blocker: We can't lower the severity level of the content filtering, since other apps are built on the same model keys.

Whoever reads this, please :face_holding_back_tears: share any thoughts, however rough. I will think them over. :innocent:

Hello and welcome to the community!

There is a free-to-use moderation endpoint that you can use to classify your content.

I suggest trying the new omni-moderation-latest model, as it supports both images and text.
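A minimal sketch with the OpenAI Python SDK; the prompt text and image URL below are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Classify a prompt + image pair; omni-moderation-latest accepts a
# mixed list of text and image_url parts.
response = client.moderations.create(
    model="omni-moderation-latest",
    input=[
        {"type": "text", "text": "the user's prompt"},  # placeholder
        {"type": "image_url",
         "image_url": {"url": "https://example.com/photo.png"}},  # or a data: URL
    ],
)

result = response.results[0]
print("flagged:", result.flagged)
# Boolean verdicts plus 0-1 scores per category, e.g.:
print("violence:", result.categories.violence,
      result.category_scores.violence)
print("self-harm:", result.categories.self_harm,
      result.category_scores.self_harm)
```

Because the response gives you per-category flags and scores, you can apply your own thresholds instead of relying on the shared filter settings of your Azure deployment.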

In the past, use of this tool was partially restricted and free only for users of other OpenAI models. Currently it is free for everyone, though there is no guarantee that this won't change in the future.

Sending content that violates the usage policies to the OpenAI models will eventually lead to your developer account being closed.
