I am interested in using OpenAI’s Whisper and GPT APIs to develop an application that can transcribe, translate, and detect potential hate speech in messages received in foreign languages.
My goal is to ensure understanding and safety in multilingual communications, but I am concerned about adhering to OpenAI’s content policies and handling sensitive material appropriately. I am seeking advice on how to build this application within ethical guidelines while remaining respectful of OpenAI’s content policy.
Any insights or recommendations on this matter would be greatly appreciated!
Use these models; they’re free.
Guide: OpenAI Platform
Additional Info :
To be honest, I use two models for text moderation.
Here’s how it works:
The input is first checked with the text-moderation-latest model. If that model returns flagged: false, it falls back to the text-moderation-stable model as a second check. Only if both models return flagged: false does the request continue to chat completions.
So if inappropriate content is detected (flagged: true) by either of the two moderation models, the process does not continue to chat completions.
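The fallback chain above can be sketched as follows. The moderation calls are abstracted as plain callables, since the exact SDK invocation varies by client version; the function and parameter names here (`is_safe_to_continue`, `check_latest`, `check_stable`) are my own, not part of the OpenAI API.

```python
def is_safe_to_continue(text, check_latest, check_stable):
    """Return True only when neither moderation model flags the text.

    check_latest / check_stable stand in for calls to the Moderation
    endpoint with model="text-moderation-latest" and
    model="text-moderation-stable"; each returns the `flagged` boolean
    from the response.
    """
    if check_latest(text):
        # flagged: true from text-moderation-latest -> stop here
        return False
    if check_stable(text):
        # fallback check also flagged the input -> stop here
        return False
    # both models returned flagged: false -> continue to chat completions
    return True


# Example with stubbed checkers (no network calls):
clean = is_safe_to_continue("hello", lambda t: False, lambda t: False)
blocked = is_safe_to_continue("hello", lambda t: False, lambda t: True)
```

In production the two lambdas would wrap real Moderation endpoint calls; keeping them injectable also makes the gating logic easy to unit-test.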
Thank you @b0zal for your response. But what about the Whisper API? I am dealing with audio messages.
Well, yeah, it’s missing there, since moderation currently only supports text input. You would need to transcribe the audio first and then moderate the transcript.
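A minimal sketch of that workaround, assuming you transcribe the audio with Whisper first and then run the transcript through text moderation. The callables stand in for the actual API calls, and the names (`moderate_audio_message`, `transcribe`, `is_text_flagged`) are mine, not OpenAI’s.

```python
def moderate_audio_message(audio_path, transcribe, is_text_flagged):
    """Transcribe an audio message, then run text moderation on it.

    transcribe:      audio file path -> transcript string
                     (e.g. a Whisper transcription call)
    is_text_flagged: transcript -> bool (the `flagged` field from the
                     Moderation endpoint)
    Returns (transcript, safe_to_continue).
    """
    transcript = transcribe(audio_path)
    # moderation accepts text only, so it runs on the transcript
    return transcript, not is_text_flagged(transcript)


# Example with stubbed stages (no network calls):
text, safe = moderate_audio_message(
    "message.ogg",
    transcribe=lambda path: "hello there",
    is_text_flagged=lambda t: False,
)
```

Translation could be added as a third stage after moderation passes, keeping the same pattern of small injectable steps.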
You would need to reach out to OpenAI to discuss it, as it would certainly break the current Terms of Service as written.
You could try Contact Sales, or email@example.com if that email address is still operational, or you could try the support bot in the bottom-right corner of help.openai.com.
Thank you @Foxalabs and @b0zal for your responses. I will contact OpenAI’s support.