You could fine-tune gpt-3.5-turbo if you have plenty of example texts paired with the resulting moderation decision for each one. These would need to be human-curated to start with, but you could then let the model make some of the determinations itself, correct any mistakes, and fold that additional data into training a more accurate version. Rinse and repeat until you reach an accuracy level you are happy with.
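As a minimal sketch of the first step, here is how you might turn a set of human-curated (text, decision) pairs into the chat-style JSONL format that OpenAI fine-tuning expects. The example texts, the `allow`/`block` label scheme, and the system prompt are all assumptions for illustration, not something prescribed by the API:

```python
import json

# Hypothetical human-curated examples: (text, moderation decision).
# In practice you would need hundreds or thousands of these.
curated = [
    ("You are wonderful!", "allow"),
    ("I will find where you live.", "block"),
]

# Assumed system prompt; adjust the labels to your own moderation policy.
SYSTEM = "You are a content moderator. Reply with exactly 'allow' or 'block'."

def to_jsonl(pairs):
    """Convert (text, decision) pairs into chat fine-tuning JSONL records."""
    lines = []
    for text, decision in pairs:
        record = {
            "messages": [
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": text},
                {"role": "assistant", "content": decision},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

print(to_jsonl(curated))
```

You would upload the resulting file and start a fine-tuning job against gpt-3.5-turbo, then run the tuned model over new text, correct its wrong calls, and append those corrections to the training set for the next round.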