Is this an allowed use of the Moderation Endpoint?

I am thinking of using OpenAI's APIs to moderate my public online forums as well as chat.

I'm considering a pipeline that scans Discord messages as follows:

Message stream (posts/comments from Discord, forums, etc.) → Moderation endpoint → fine-tuned or standard OpenAI GPT-4.1 Nano model for taking action.
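
In code, roughly (a minimal sketch using the openai Python SDK; the `handle_message` wrapper, the system prompt, and the IGNORE/WARN/DELETE action set are placeholders I made up, not a real implementation):

```python
# Sketch only: assumes the official openai Python SDK and an
# OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def handle_message(text: str) -> str | None:
    # Step 1: the free Moderation endpoint flags potentially violating content.
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]

    if not result.flagged:
        return None  # most messages stop here, so no second API call is made

    # Step 2: only flagged messages reach the (fine-tuned) Nano model,
    # which weeds out false positives and picks an action.
    completion = client.chat.completions.create(
        model="gpt-4.1-nano",  # or a fine-tuned variant (ft:gpt-4.1-nano:...)
        messages=[
            {
                "role": "system",
                "content": "You moderate a forum. For the flagged message, "
                           "reply with exactly one of: IGNORE, WARN, DELETE.",
            },
            {"role": "user", "content": text},
        ],
    )
    return completion.choices[0].message.content.strip()
```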


This should give good results, with Nano filtering out false positives. I've estimated the cost of using Llama Guard for moderation instead, and it's relatively tolerable (under 10 EUR monthly for a lot of input data), but OpenAI's Moderation endpoint is free and likely handles higher throughput, so I'm leaning toward it.

What's more, Nano is both cheap as a model and supports fine-tuning, which is useful for cutting costs and for adapting it to take actions on users' messages.
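
On the fine-tuning side, the training data would just be the same prompt with a human-labelled action as the assistant turn (again only a sketch; the file name and labels are made up to match the snippet above):

```python
import json

# One chat-format training example per JSONL line.
example = {
    "messages": [
        {"role": "system", "content": "You moderate a forum. For the flagged "
                                      "message, reply with exactly one of: "
                                      "IGNORE, WARN, DELETE."},
        {"role": "user", "content": "some flagged message from the community"},
        {"role": "assistant", "content": "IGNORE"},  # the human-labelled action
    ]
}

with open("moderation_actions.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example) + "\n")
```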

I'm interested in whether this would count as an unsupported use (since most moderation checks won't be followed by any further API call; only positive identifications are passed on to the model).

On one hand, I saw previous posts that mentioned a rule that has since disappeared (about disallowing its use outside of pre-filtering API inputs). I also suspect OpenAI wouldn't mind the data from toxic video game communities.

On the other hand, it's still extra load on the provider's side, so I'm unsure.