Moderations API pretty much gave up on objectionable content

Trying to use the Moderations API to find objectionable content in stories. It is missing pretty much everything now. We had a post about sexual assault and nothing was flagged. A character beating someone up until they died didn't even get flagged for violence.

This used to work; what happened? We need a reliable model to flag these types of things.

Sorry, but the old "text-davinci-003" we used to use worked far better than what's on offer now. Why did they ever remove that model? It worked so well. The new gpt-3.5-turbo-instruct sucks. We ended up switching to Anthropic's Claude because the quality from OpenAI dropped so much.

You need to set your own custom thresholds using the floating-point category scores the endpoint returns, instead of relying on the built-in boolean flags.
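A minimal sketch of what that could look like: compare each entry of the `category_scores` object against your own per-category cutoffs. The threshold values, the helper function name, and the sample scores below are all made up for illustration; in practice the scores dict would come from a Moderations API response.

```python
# Hypothetical per-category thresholds, tuned for your own content policy.
CUSTOM_THRESHOLDS = {
    "violence": 0.10,
    "sexual": 0.05,
    "self-harm": 0.02,
}

def flag_with_custom_thresholds(category_scores: dict, thresholds: dict) -> dict:
    """Return the categories whose score meets or exceeds our own cutoff,
    ignoring the API's built-in boolean flags."""
    return {
        cat: score
        for cat, score in category_scores.items()
        if score >= thresholds.get(cat, 1.0)  # default 1.0 = never flag
    }

# Made-up scores shaped like a `category_scores` payload.
scores = {"violence": 0.31, "sexual": 0.004, "self-harm": 0.001}
flagged = flag_with_custom_thresholds(scores, CUSTOM_THRESHOLDS)
print(flagged)  # {'violence': 0.31}
```

Lowering a threshold makes that category stricter, so sensitive categories can be tightened without over-flagging everything else.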

The moderation endpoint must be flexible enough to let screenwriters and book authors process difficult topics; it's not meant to be a single pass/fail checker.