Policies regarding processing potentially harmful content

Hello all,

I am interested in using OpenAI’s Whisper and GPT APIs to develop an application that can transcribe, translate, and detect potential hate speech in messages received in foreign languages.

My goal is to ensure understanding and safety in multilingual communications, but I am concerned about handling sensitive material appropriately. I am seeking advice on how to build this application within ethical guidelines while remaining respectful of OpenAI's content policy.

Any insights or recommendations on this matter would be greatly appreciated!

Thanks

Use these models; they're free.

Guide: OpenAI Platform

Additional info:
To be honest, I use two models for text moderation.
Here’s how it works:

If the text-moderation-latest model returns flagged: false, the text is checked again with the text-moderation-stable model as a fallback. If both models return flagged: false, the request continues on to chat completions.

So if either text-moderation-latest or text-moderation-stable returns flagged: true, the request does not continue to chat completions.
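
For reference, here's a minimal sketch of that fallback flow, assuming the official openai Python client; is_flagged and moderated_chat are illustrative names, not part of the API:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_flagged(text: str) -> bool:
    """Check text with text-moderation-latest, falling back to
    text-moderation-stable when the first model does not flag it."""
    for model in ("text-moderation-latest", "text-moderation-stable"):
        result = client.moderations.create(input=text, model=model)
        if result.results[0].flagged:
            return True  # flagged: true by either model blocks the request
    return False  # both models returned flagged: false

def moderated_chat(user_text: str):
    """Forward text to chat completions only if it passes moderation."""
    if is_flagged(user_text):
        return None  # stop here; do not call chat completions
    return client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": user_text}],
    )
```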


Thank you @b0zal for your response. But what about the Whisper API? I am dealing with audio messages.

Well, yeah, that part is missing, since the moderation endpoint currently only accepts text input.
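
One possible workaround is to transcribe the audio first and then moderate the transcript text. A minimal sketch, assuming the openai Python client and reusing the illustrative is_flagged helper from above:

```python
def moderate_audio_message(path: str) -> bool:
    """Transcribe an audio message with Whisper, then moderate the
    transcript. Returns True if the transcript text is flagged."""
    with open(path, "rb") as audio_file:
        # client.audio.translations.create(...) could be used instead
        # to translate foreign-language speech into English first.
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )
    return is_flagged(transcript.text)  # is_flagged as sketched above
```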


You would need to reach out to OpenAI to discuss it; it would certainly break the current Terms of Service as written.

You could try Contact Sales or support@openai.com, if that email address is still operational, or you could try the support bot in the bottom-right corner of help.openai.com.


Thank you @Foxalabs and @b0zal for your responses. I will contact OpenAI’s support.
