Can I use the OpenAI Moderation API against non-OpenAI content? Part 2

The documentation used to say

The moderation endpoint is free to use when monitoring the inputs and outputs of OpenAI APIs. We currently disallow other use cases.

Now it says

The moderation endpoint is free to use for most developers.

https://platform.openai.com/docs/guides/moderation/overview

Does this mean I can now use the OpenAI Moderation API against non-OpenAI-generated content and prompts?

3 Likes

Sounds more like “is not free to use for…” (insert accounts that must pay soon).

1 Like

Interesting. Good catch.

I wonder if it will be free for OAI users/uses and a fee for others?

Could be a tidy little revenue stream for them, I imagine.

This was changed a little over two weeks ago,

The full list of changes for this help article is,

The moderations endpoint is a tool you can use to check whether ~~content complies with OpenAI’s usage policies~~ text is potentially harmful. Developers can ~~thus~~ use it to identify content that ~~our usage policies prohibits~~ might be harmful and take action, for instance by filtering it.

The moderation endpoint is free to use ~~when monitoring the inputs and outputs of OpenAI APIs. We currently disallow other use cases. Accuracy may be lower on longer pieces of text~~ for most developers.

Below is an example output of the endpoint. It returns the following fields:

  • flagged: Set to true if the model classifies the content as ~~violating OpenAI’s usage policies~~ potentially harmful, false otherwise.
  • categories: Contains a dictionary of per-category ~~binary usage policies~~ violation flags. For each category, the value is true if the model flags the corresponding category as violated, false otherwise.
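For reference, here is a minimal sketch of what such a response body looks like and how those fields might be read. The category names and scores below are an illustrative subset I made up for the example, not the endpoint’s full output:

```python
import json

# Illustrative moderation response (an assumed subset of categories,
# not the endpoint's complete category list).
sample_response = json.loads("""
{
  "id": "modr-example",
  "model": "text-moderation-latest",
  "results": [
    {
      "flagged": true,
      "categories": {
        "hate": false,
        "violence": true,
        "self-harm": false
      },
      "category_scores": {
        "hate": 0.01,
        "violence": 0.92,
        "self-harm": 0.002
      }
    }
  ]
}
""")

result = sample_response["results"][0]

# "flagged" is the overall binary verdict; "categories" holds the
# per-category booleans, and "category_scores" the raw model scores.
print(result["flagged"])  # True

violated = [name for name, hit in result["categories"].items() if hit]
print(violated)  # ['violence']
```

This is also why the note about `category_scores` recalibration matters: any custom threshold you set against the raw scores is tied to the current underlying model.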

~~OpenAI will~~ We plan to continuously upgrade the moderation endpoint’s underlying model. Therefore, custom policies that rely on ~~category_scores~~ `category_scores` may need recalibration over time.

My read on this is that OpenAI may be looking to expand the use and usefulness of the moderation endpoint beyond filtering what is sent to the models. I imagine that use for the original purpose of preventing bad inputs from reaching the model will continue to be free, limited light use for other purposes might be free, and use at scale may require some sort of paid contract.

But, absent an official announcement from OpenAI, it would likely be most prudent to continue to use the moderations endpoint as originally intended.
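In code, “as originally intended” just means putting a gate in front of the model call. A sketch, assuming the `openai` Python client’s `moderations.create` method; the decision logic itself is a plain function you can keep separate from the API call:

```python
def is_allowed(moderation_result: dict) -> bool:
    """Return True if the text should be forwarded to the model,
    based on the endpoint's overall "flagged" verdict."""
    return not moderation_result["flagged"]


# Hypothetical usage with the openai Python client (requires an API key):
#
#   from openai import OpenAI
#   client = OpenAI()
#   mod = client.moderations.create(input=user_text)
#   if is_allowed(mod.results[0].model_dump()):
#       ...forward user_text to the chat/completions call...
#   else:
#       ...reject or filter the input...

print(is_allowed({"flagged": False, "categories": {}}))              # True
print(is_allowed({"flagged": True, "categories": {"hate": True}}))   # False
```

Keeping the decision in its own function also makes it easy to swap in a stricter policy later (e.g. thresholding `category_scores`) without touching the API plumbing.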

3 Likes

Yeah, it could be a nice “little” revenue stream for them.

Solid advice, @elmstedt!

My read of this is that OpenAI phrased any commitment on its own part out of it.

Read:

  • You cannot use the moderations endpoint to determine whether content violates OpenAI’s usage policies or not. Use of the moderations endpoint does not indemnify you as a developer, and we may still close your account for submitting false negatives to the model.

  • OpenAI makes no commitment to upgrading the moderation endpoint, but it’s possible that it may be planned.

2 Likes

I think it would be great if they expanded the use of this model. I would love to use it in other applications that do not have to do with OpenAI.

From the FAQ:

The moderation endpoint is free to use when monitoring the inputs and outputs of OpenAI APIs. We currently disallow other use cases.

https://help.openai.com/en/articles/4936833-are-the-moderation-endpoint-and-content-filter-free-to-use