Using a fine-tuned model in openai.Moderation.create

I have fine-tuned a model for content moderation, but when I use the fine-tuned model as follows:

openai.Moderation.create(
    # use the fine-tuned model here
    input=TEXT_TO_MODERATE,
    model='text-moderation-latest:' + FINETUNED_MODEL
)
I get the following error, prompting me to use the generally available models:

ValueError: The parameter model should be chosen from ['text-moderation-stable', 'text-moderation-latest'] and it is default to be None.

Welcome to the community.

I don’t believe you can use your own model for content moderation… i.e. the error message says you have to choose text-moderation-stable or text-moderation-latest…
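For reference, here's a minimal sketch of the only supported call (assuming the legacy openai Python SDK; the key and sample input are placeholders):

import openai

openai.api_key = "sk-..."  # placeholder; use your own key

# Only the two stock models are accepted by this endpoint:
response = openai.Moderation.create(
    input="Some text to check",
    model="text-moderation-latest",  # or "text-moderation-stable"
)
print(response["results"][0]["flagged"])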


Thanks Paul.

Appreciate your quick response.
Are there any plans/roadmaps to fine-tune custom content-moderation models in the future to cater to specific domains?

Best,

Ravi

No problem. I don’t work for OpenAI, so I’m not sure. Might be something you reach out to them about, though. Either here in the Feedback topic or via support.

Are the normal ones not working for you? Or are you trying to do something different?

Of course, the standard models work.

A little more context:

Say, for instance, I want to add more filters beyond the regular off-the-shelf ones (hate, violence, etc.) currently provided by the text-moderation-latest engine; I can't do that now.


Ah, I gotcha. You might want to fine-tune a normal model, Davinci or maybe even Curie, and use that separately…

You’d need a few hundred examples, at least, but I think it could probably be done…
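As a rough sketch of that approach (assuming the legacy Completions fine-tuning flow; the model name, prompt separator, and labels below are made up):

import openai

# Training data would be a JSONL file of prompt/completion pairs, e.g.:
#   {"prompt": "user text here\n\n###\n\n", "completion": " safe"}
#   {"prompt": "other text here\n\n###\n\n", "completion": " spam"}
# uploaded and trained via the legacy fine-tunes endpoint.

FINETUNED_MODEL = "curie:ft-your-org-2023-01-01-00-00-00"  # hypothetical name

resp = openai.Completion.create(
    model=FINETUNED_MODEL,
    prompt="text to moderate\n\n###\n\n",  # same separator as in training
    max_tokens=1,    # single-token labels keep the output constrained
    temperature=0,   # deterministic classification
)
label = resp["choices"][0]["text"].strip()  # e.g. "safe" or "spam"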


Use embeddings to create an offline classifier. It will save you calls to the Moderation API, and you can create any classifications you want, but you will need a decent sample size.

In a different use case, it worked extremely well using emails pulled from a junk folder vs. non-junk email (Gmail did the original junk filtering).
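Roughly, the flow looks like this (a minimal sketch assuming the legacy openai SDK plus scikit-learn; the sample texts and labels are made up):

import openai
from sklearn.linear_model import LogisticRegression

def embed(texts):
    # One API call per batch; once the classifier is trained,
    # classification itself runs locally on the embeddings.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return [item["embedding"] for item in resp["data"]]

# Hypothetical labeled sample: 1 = junk, 0 = not junk. A real training
# set would need a decent number of examples per class.
train_texts = ["You have won a free prize, click now!", "Meeting moved to 3pm"]
train_labels = [1, 0]

clf = LogisticRegression().fit(embed(train_texts), train_labels)

print(clf.predict(embed(["Claim your reward today"])))  # -> [1], i.e. junk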


I guess I should do that.
