Hello, good afternoon. A few days ago I started testing the moderation endpoint for my project, and it works, but there is a drawback: when I exceed 3 requests in a minute I get an error telling me that this is the per-minute request limit. From what I researched online, though, the moderation endpoint is completely free. So, to increase this limit, do I have to pay even though the tool itself is free, or what should I do?
Hi!
3 requests per minute sounds like you are on the free plan. If you pay at least $5 you move up to usage tier 1, which has better rates: 500 RPM in this case.
https://platform.openai.com/docs/guides/rate-limits/usage-tiers?context=tier-free
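Until you move up a tier, the usual workaround is client-side backoff. Here is a minimal sketch; `RateLimitError` and `call_moderation` are stand-ins for whatever your API client actually raises and calls, not the real SDK names.

```python
import time
import random


class RateLimitError(Exception):
    """Stand-in for the rate-limit error your API client raises."""


def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn, retrying with exponential backoff when the rate limit is hit."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after the last retry
            # wait 1s, 2s, 4s, ... plus a little jitter
            time.sleep(base_delay * (2 ** attempt) + random.random() * 0.1)
```

On the free tier's 3 RPM you would still average at most one call every 20 seconds, so backoff only smooths over bursts; it does not raise the limit.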
Yes, I am on the free plan. I did see the page you sent, but since the moderation endpoint wasn’t in the list I assumed it didn’t apply.
Looking at it from this perspective it does make sense, though: if a user can only send 3 (or 500) requests per minute to the models, there is no reason to allow more than that for the moderation endpoint.
Maybe the takeaway from our conversation is that the documentation could be updated to clarify this a bit more.
Sure, but has it actually been confirmed to work the way you describe? It would be a pity to pay and have the limit not increase, haha.
Jokes aside, if you need to check inputs for compliance with the ToS in order to keep your account from being banned, then you can expect your paid API credits to cover use of the moderation endpoint at the rates of your usage tier.
I can confirm that you get 500 RPM for the moderation endpoint on tier 1, and that the numbers grow with each tier.
But I have never tried to hit the limits of the moderation endpoint on its own; I always viewed them as a restriction tied to the “real models”. So there is a little learning in this for me too.
I understand. And what is the moderation endpoint usually used for? I am testing it to use it as a moderator for a chat.
The moderation endpoint protects the developer from users behaving badly, whether intentionally or not.
If the moderation endpoint returns a clear signal, the message is free to be forwarded to the LLM. Otherwise you should inform your user that the request cannot be satisfied and take whatever additional action is needed.
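That gating step can be sketched like this. The dict shape follows the `results` entries the moderation endpoint returns (a `flagged` boolean plus per-category flags); the function names and the refusal message are my own invention.

```python
def gate_message(moderation_result: dict) -> bool:
    """Return True if the message may be forwarded to the LLM.

    moderation_result is one entry of the `results` list returned by
    the moderation endpoint, e.g.
    {"flagged": True, "categories": {"hate": True, ...}, ...}
    """
    return not moderation_result.get("flagged", False)


def handle_user_message(text: str, moderation_result: dict) -> str:
    """Forward clean messages; refuse flagged ones."""
    if gate_message(moderation_result):
        return f"FORWARD:{text}"  # hand the message on to the model
    # flagged: refuse, then take whatever additional action is needed
    # (logging, warning the user, rate-limiting repeat offenders, ...)
    return "Sorry, this request cannot be satisfied."
```

The same gate can be applied a second time to the model's output before it is shown to the user.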
If you pass messages to the models unfiltered, OpenAI can and will terminate your account, because they do check the outputs. This is especially a problem if you have several apps running on one account, but using the moderation endpoint is best practice anyway.
I am sure you have seen this:
https://platform.openai.com/docs/guides/moderation/overview
Note that using the moderation endpoint for anything other than in combination with OpenAI models is also a breach of the terms of service.
Uhh, I was recommended the moderation endpoint in a previous post. (If you want, you can look it up in my profile; it won’t let me put links in my post. It’s my very first post.)
By my math, it looks like you’ve got about 11 days before you don’t have to worry about the three-month free trial or its limitations…
Here’s that previous post.
Someone made the “have you tried moderations?” suggestion, against the clear guidance now found in the documentation that the moderations endpoint is not for applications outside of filtering AI model inputs and outputs.
That conversation then went further into the proper prescription of using other AI language models for the task of classifying offensiveness.
Now that logprobs have just been made available for gpt-3.5-turbo (which in many cases requires less work to make the AI follow your instructions), you can use not just the single score that is output, but a sum of all the top token scores, weighted by their probability, to come up with a clearer answer about the AI’s view of offensiveness.
This technique is required because the language model is just as capricious as moderations itself: with just a few tweaks of the instructions, on a rating scale from 11-20, I get a score anywhere from 11 to 16 on the same input.
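The weighting described above can be sketched in a few lines. This assumes the rating prompt makes the model answer with a single numeric token whose alternatives show up in the completion's `top_logprobs`; the function name and the exact dict shape are my own simplification, not the SDK's.

```python
import math


def weighted_score(top_logprobs: dict) -> float:
    """Probability-weighted average of numeric rating tokens.

    top_logprobs maps candidate tokens (e.g. "11", "12", ...) to their
    logprobs, as reported for the score position of the completion.
    Non-numeric tokens are skipped; the weights of the remaining
    tokens are renormalized so they sum to 1.
    """
    total_p = 0.0
    acc = 0.0
    for token, lp in top_logprobs.items():
        try:
            value = float(token)
        except ValueError:
            continue  # skip tokens that are not a number
        p = math.exp(lp)  # convert logprob back to a probability
        acc += value * p
        total_p += p
    return acc / total_p if total_p else float("nan")
```

For example, if the model puts 50% on “11” and 50% on “12”, the weighted score is 11.5, which is more stable across prompt tweaks than the single sampled token.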
Regarding the usage of the moderation endpoint:
The moderation endpoint is free to use when monitoring the inputs and outputs of OpenAI APIs. We currently disallow other use cases.
This is from the link I posted earlier.
But in the link that @_j posted you were initially referring to an OpenAI API, and in our conversation I also needed some time to realize that you intend to use the moderation endpoint on its own, and this sounds like your chat otherwise has nothing to do with OpenAI services.
I hope you understand that we generally assume that use of the moderation endpoint, or the other APIs, is in line with the ToS.