Using OpenAI to detect toxicity in comments?

I’m wondering if (and how) it would be possible to use the OpenAI API to detect toxicity in a comment.

I have been eyeing perspectiveapi.com, but it does not have Swedish language support, whereas OpenAI seems to understand Swedish.


Welcome to the forum @arvidson

:wave: Hi!

A very cheap and easy solution could be to Google Translate the comment and submit the result to the OpenAI toxicity engine.
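
For the translation step, a rough sketch, assuming the google-cloud-translate Python client and that you already have Google Cloud credentials set up (the sample comment is made up, and `to_english` is just a placeholder name):

```python
# Rough sketch: translate a Swedish comment to English before classification.
# Assumes the google-cloud-translate client library and Google Cloud credentials.
from google.cloud import translate_v2 as translate

translate_client = translate.Client()

def to_english(comment: str) -> str:
    """Translate a (presumably Swedish) comment into English."""
    result = translate_client.translate(comment, target_language="en")
    return result["translatedText"]

# Made-up example comment; the translated text can then be sent to
# whatever OpenAI classifier you settle on.
english_comment = to_english("Du är helt värdelös.")
```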

Are you referring to https://beta.openai.com/docs/engines/content-filter or is there another toxicity engine? Or a way to create a prompt that finds toxicity in a comment?
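
To make the second part of the question concrete, I was imagining something along these lines (completely untested; the engine name, prompt wording and parameters are just my guesses):

```python
# Untested sketch of prompt-based toxicity classification.
# Engine name, prompt wording and parameters are guesses on my part.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def is_toxic(comment: str) -> bool:
    prompt = (
        "Classify the following comment as Toxic or Not toxic.\n\n"
        f"Comment: {comment}\n"
        "Classification:"
    )
    response = openai.Completion.create(
        engine="text-davinci-002",  # guess; any instruct-style engine might do
        prompt=prompt,
        max_tokens=5,
        temperature=0,
    )
    answer = response["choices"][0]["text"].strip().lower()
    return answer.startswith("toxic")
```

Would that kind of prompt be reliable enough, or is the content-filter engine the intended tool for this?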

My understanding is that the content-filter is a way to ensure that results coming from the API are not toxic.
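
For reference, the usage pattern on that docs page classifies a piece of text into one of three labels, roughly like this (the engine name, prompt format and label meanings come from the linked docs; the wrapping function is my own and untested):

```python
# Sketch of the content filter usage described in the linked docs:
# it returns "0" (safe), "1" (sensitive) or "2" (unsafe) for arbitrary text,
# so it can be pointed at user comments as well as API output.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def content_filter_label(text: str) -> str:
    response = openai.Completion.create(
        engine="content-filter-alpha",
        prompt="<|endoftext|>" + text + "\n--\nLabel:",
        temperature=0,
        max_tokens=1,
        top_p=0,
        logprobs=10,
    )
    return response["choices"][0]["text"]  # "0", "1" or "2"

if content_filter_label("some translated comment") == "2":
    print("Flagged as unsafe by the content filter")
```

The docs also recommend checking the logprob of a "2" label against a threshold before treating it as unsafe, which is why `logprobs` is requested above; I've left that check out to keep the sketch short.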

Sorting out the false positives and investigating within the limits of the law and contractual agreements is a completely different story.

Yeah. I agree! :slight_smile: