I am interested in using OpenAI’s Whisper and GPT APIs to develop an application that can transcribe, translate, and detect potential hate speech in messages received in foreign languages.
My goal is to ensure understanding and safety in multilingual communications, but I am concerned about adhering to OpenAI’s content policies and handling sensitive material appropriately. I am seeking advice on how to build this application within ethical guidelines and in a way that respects OpenAI’s content policies.
Any insights or recommendations on this matter would be greatly appreciated!
Additional info:
To be honest, I use two models for text moderation.
Here’s how it works:
First, the input is checked with the text-moderation-latest model. If it returns flagged: false, the text is checked again with the text-moderation-stable model as a fallback. Only when both models return flagged: false does the request continue to chat completions.
In other words, if either text-moderation-latest or text-moderation-stable returns flagged: true, the process stops and the message never reaches chat completions.
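The two-model gate above can be sketched as a small function. This is only an illustration of the control flow, not a complete app: the checker callables here are hypothetical stand-ins for real Moderation API calls (e.g. `client.moderations.create(...)` in the official Python SDK, reading `results[0].flagged` from the response).

```python
def should_continue(check_latest, check_stable, text):
    """Return True only if neither moderation model flags the text.

    check_latest / check_stable: callables returning True when the text
    is flagged -- stand-ins for calls to text-moderation-latest and
    text-moderation-stable.
    """
    if check_latest(text):   # flagged: true on the latest model -> stop
        return False
    if check_stable(text):   # fallback check with the stable model
        return False
    return True              # both returned flagged: false -> proceed


# Demo with stub checkers in place of real Moderation API calls.
latest = lambda t: "badword" in t
stable = lambda t: "slur" in t

print(should_continue(latest, stable, "hello there"))  # True  -> chat completion
print(should_continue(latest, stable, "badword"))      # False -> blocked
print(should_continue(latest, stable, "a slur"))       # False -> blocked
```

In a real deployment, each stub would wrap an actual Moderation API request, and the chat-completion call would run only when `should_continue` returns True.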