Hi devs and community members,
We are building an app that detects whether an image contains hate, abuse (of any type), or injury. The user passes a prompt plus an image, and the model is asked to classify the content.
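For context, here is a minimal sketch of the kind of call we make (the endpoint, key, API version, and deployment name below are placeholders, not our real values):

```python
import base64
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-key>",                                       # placeholder
    api_version="2024-02-01",
)

# Encode the user-supplied image as a base64 data URL.
with open("image.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Classify this image for hate, abuse, or injury."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }
]

response = client.chat.completions.create(
    model="<vision-deployment>",  # placeholder for a vision-capable deployment
    messages=messages,
)
print(response.choices[0].message.content)
```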
Issue: We get the following error whenever an image depicting hate or abuse is passed:
Error code: 400 - {
    'error': {
        'inner_error': {
            'code': 'ResponsibleAIPolicyViolation',
            'content_filter_results': {
                'sexual': {'filtered': False, 'severity': 'safe'},
                'violence': {'filtered': True, 'severity': 'high'},
                'hate': {'filtered': False, 'severity': 'safe'},
                'self_harm': {'filtered': False, 'severity': 'safe'}
            }
        },
        'code': 'content_filter',
        'message': "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: ...",
        'param': 'prompt',
        'type': None
    }
}
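In case it helps diagnosis, this is roughly how we catch and inspect the error today (a sketch continuing from the code above, assuming the openai Python SDK v1.x; the exact shape of `e.body` is our reading of the error dump, not something we found documented):

```python
from openai import BadRequestError

# `client` and `messages` are defined in the sketch above.
try:
    response = client.chat.completions.create(
        model="<vision-deployment>",  # placeholder
        messages=messages,
    )
    print(response.choices[0].message.content)
except BadRequestError as e:
    body = e.body if isinstance(e.body, dict) else {}
    # When the prompt itself is blocked, Azure reports code == 'content_filter'
    # and nests the per-category results under inner_error (shape assumed
    # from the error dump above).
    if body.get("code") == "content_filter":
        results = body.get("inner_error", {}).get("content_filter_results", {})
        for category, result in results.items():
            print(category, result.get("filtered"), result.get("severity"))
    else:
        raise
```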
Blocker: We can't lower the severity level of the content filter, since other apps are built on the same model deployment and keys.
If anyone has thoughts or suggestions, please share them; I will think them over.