Can the moderation API be used for non-OpenAI-related content?

Can I use the moderation endpoint to moderate the user-generated content of an app I'm building? I don't want to use any other OpenAI APIs, only the moderation endpoint to moderate my content. Is this allowed?

Thanks!

Documentation:

The moderation endpoint is free to use when monitoring the inputs and outputs of OpenAI APIs. We currently disallow other use cases.
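For reference, that allowed use looks roughly like this (a minimal sketch with the openai Python library; field names follow the current SDK, your version may differ):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

user_input = "Some user-generated text headed for a chat completion."

# Free pre-check: moderate the input before passing it on to another OpenAI API.
moderation = client.moderations.create(input=user_input)
result = moderation.results[0]

if result.flagged:
    print("Flagged: don't forward this input to the API.")
else:
    print("Not flagged: OK to forward to the API.")
```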


Yes, that's why I got confused. Why are other use cases disallowed? Is it because the endpoint is free? If so, will I have to implement a separate moderation API using the GPT endpoint for my case? That doesn't really make sense…

The main purpose is that AI models can produce unpredictable outputs.

You might ask about purple dinosaurs, but then the model veers into hippie free-love drug culture, or just outputs random neural knowledge.

It also protects you against customer inputs that would provoke the AI into naughtiness that violates OpenAI policy, detected the same way.

It's not a free content filter for your forum. Nor does it really apply the way a word filter would, because moderations is concept-based, aimed at particular subjects (and a bit of voodoo too).

That still doesn't make sense. Then this moderation API should also be allowed for the inputs/outputs of non-OpenAI AI models, like open-source ones that have a higher tendency to produce NSFW content. Why disallow it?

OK, so I spent last night building a custom moderation system that uses OpenAI's GPT models. Now, when I want to use it, will I have to first run every single prompt through the original OpenAI moderation endpoint to ensure the inputs fit OpenAI's terms?
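For context, the system I built looks roughly like this (a simplified sketch; the policy prompt, labels, and model name are all placeholders, not my exact setup):

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical custom policy: these categories are mine, not OpenAI's.
CUSTOM_POLICY = (
    "Classify the text as ALLOWED or BLOCKED under these rules: "
    "block spam, doxxing, and advertising; allow everything else. "
    "Reply with exactly one word: ALLOWED or BLOCKED."
)

def custom_moderate(text: str) -> bool:
    """Return True if the text passes the custom policy."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model; any chat model works
        messages=[
            {"role": "system", "content": CUSTOM_POLICY},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    verdict = response.choices[0].message.content.strip().upper()
    return verdict == "ALLOWED"
```

The catch, and the reason for my question: every prompt classified this way still transits an OpenAI API, so it is still subject to OpenAI's own policies on the way through.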

@anon92244787

As this topic has a selected solution, care if I close this topic?

Nope, I can’t really accept that answer as a solution.


Thanks for responding and removing the selection on the topic.

That's a pretty astute observation. If OpenAI is using matching technology themselves to detect OpenAI content policy violations, so should you, before your content gets to them. I've high-scored the moderations endpoint and still have my account.

What moderations flags may also be vastly different from what you'd allow.

This means I can't really create a custom moderation API for my use cases using OpenAI, because some unique cases I'd want to allow won't fit OpenAI's moderation policy in the first step. Am I right?

You are sort of right. Flagging requires a pretty high score threshold, but it is also unpredictable.

There might be flags you’d allow through, like on your suicide survivors forum where they talk about cutting.

Or certainly things the moderations won’t flag, like swear words or crypto fraud scams.

You really can create a custom moderation API for those latter use cases.

You're also given per-category moderation scores along with the flagged boolean, so you can use those instead: pick a category, see which score triggered the flag, and adjust your own thresholds. There's no guidance on whether you're still protected once you've re-tuned them.
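A sketch of what tuning against the raw scores could look like (category names per the current openai Python SDK; the threshold numbers are made up for illustration, not recommendations):

```python
from openai import OpenAI

client = OpenAI()

moderation = client.moderations.create(
    input="A post from the hypothetical suicide-survivors forum."
)
result = moderation.results[0]

# Illustrative thresholds only: tuned for your community, not OpenAI's defaults.
CUSTOM_THRESHOLDS = {
    "self_harm": 0.90,   # allow frank discussion up to a high score
    "violence": 0.50,
    "harassment": 0.40,
}

# model_dump() turns the pydantic model into a plain dict;
# keys use underscored field names in this SDK (e.g. "self_harm").
scores = result.category_scores.model_dump()

for category, threshold in CUSTOM_THRESHOLDS.items():
    if scores.get(category, 0.0) > threshold:
        print(f"Blocked by custom rule: {category} = {scores[category]:.3f}")
```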

Call moderations and never the API: not good.
Call the API and never moderations: a ban decision may not favor you.
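Putting that together, the safe ordering is: moderate the input, call the API, moderate the output. A sketch, with the model name as a placeholder:

```python
from openai import OpenAI

client = OpenAI()

def safe_completion(user_text: str) -> str | None:
    """Moderate input and output around every API call; None means refused."""
    if client.moderations.create(input=user_text).results[0].flagged:
        return None  # refuse: never send flagged input to the API

    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model
        messages=[{"role": "user", "content": user_text}],
    ).choices[0].message.content

    if client.moderations.create(input=reply).results[0].flagged:
        return None  # the model itself misbehaved; don't show the output

    return reply
```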

Got it. OK, I think I can mark this answer as the solution now. I'm really going to have to use an open-source model like Mistral or something for my case then. I don't want to worry about getting banned or not, as it would collapse the product I'm building. I have a tool in my hand, I pay for it, and I want it to do whatever I want, whenever I want.

@anon92244787

As this topic has a selected solution, care if I close this topic?

1 Like

I will take the like on the question as a yes and close this topic.

Thanks