API Endpoints with Integrated Content Moderation

ChatGPT is an end-user product, and it uses the moderation endpoint really well.



Don’t you think cutting the number of requests per second in half is a good idea? lol. I do.

When developing a tool for everyone, the most commonly occurring usage patterns become particularly relevant. If most of the time a call does moderation + completion, that's a pretty good hint that you need an endpoint that encapsulates exactly that.
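A minimal sketch of the pattern being described: one entry point that encapsulates the moderation check and the completion. The `call_moderation` and `call_completion` functions here are hypothetical offline stand-ins for the real HTTP calls, just to show the shape of the wrapper.

```python
class ModerationError(Exception):
    """Raised when the input is flagged before the completion ever runs."""


def call_moderation(text: str) -> bool:
    # Hypothetical stand-in: pretend anything containing "forbidden" is flagged.
    return "forbidden" in text.lower()


def call_completion(text: str) -> str:
    # Hypothetical stand-in for the completion request.
    return f"completion for: {text}"


def moderated_completion(text: str) -> str:
    """The encapsulated moderation + completion call in one place."""
    if call_moderation(text):
        raise ModerationError("input flagged by moderation")
    return call_completion(text)
```

The point is that every caller repeating this two-step dance is a signal the server could do it in one round trip.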

I 100% agree with @sps

But I fail to see the logic here

Why would this cut the number of requests per second in half?


“API Endpoints with Integrated Content Moderation” takes one HTTP request. Without integration it’s two.

You made a whole lot of assumptions I wasn’t implying. I’m just saying have a “moderate=true” flag, and if I set that the server can simply throw a bad request error, with the reason. It’s not mixing responsibilities. It’s good API design. If something takes 10 steps you don’t always tell the API consumers to make 10 HTTP requests. It’s an art not a science.

HTTP APIs normally have all kinds of different “reasons” that any particular request can fail. I’m just saying “bad morality” is most certainly one of the reasons the “completions” should be able to fail. And from my 23yrs exp as a dev, I can tell you it’s about 3 lines of code to check this and throw an exception in OpenAI’s implementation. Aside from adding “dontDingMeBro” as an argument.

The rate limits are counted per endpoint; using the moderation endpoint is not going to count towards your rate limit on completion requests :laughing:

Regardless of academically good API design, Azure already does this. They run filtering on all chat completion requests and return moderation errors from that endpoint.

Good point. That would imply an “immoral query” attempt would cost nothing, because they refused to answer it. That’s identical to submitting a moderation endpoint query that fails. Currently, they already offer the pure moderation endpoint for free. Correct.

To me that’s just silly and would result in a lot of unhandled errors.

Well, I think that’s the crux of the issue in these threads. Azure thinks it’s important enough to strictly enforce that all requests be moderated before running. OpenAI doesn’t, and while OpenAI may punish you if you don’t moderate, the documentation could certainly be clearer about when to use the moderation endpoint and the ramifications of not using it.


Will take it as an action item to make it clearer that people should use moderation. Our best practices and safety best practices already suggest this, but will look for more places to add it.


Yes, they should. My service is partly backed by GPT, and the actions of its users could lead to my API access being revoked. I am implementing a moderation check before the API call now, but I hate that this is even a thing. The API could just respond with an HTTP-style code like 403 Forbidden when a request is against the guidelines, and the API user could implement a catch.
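A sketch of the catch being described, under the assumption that a moderation rejection surfaces as a 403-style error. The `api_call` function and `ForbiddenByModeration` exception are illustrative names, not a real client library.

```python
class ForbiddenByModeration(Exception):
    """Hypothetical error for a 403-style moderation rejection."""

    status_code = 403


def api_call(prompt: str) -> str:
    # Hypothetical stand-in for the real API; pretend flagged prompts get 403.
    if "forbidden" in prompt.lower():
        raise ForbiddenByModeration("403 Forbidden: violates guidelines")
    return f"response for: {prompt}"


def safe_call(prompt: str) -> str:
    """Catch the moderation rejection instead of letting it crash the caller."""
    try:
        return api_call(prompt)
    except ForbiddenByModeration as err:
        return f"rejected ({err.status_code}): {err}"
```

This keeps the moderation decision on the server while still giving the API consumer a clean, catchable failure mode.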