Just got accepted to the beta, can’t wait to check out the capabilities!
I tried the autocomplete sandbox with something in my language (Romanian) and, funnily enough, it got flagged as unsafe by the system (even though it isn’t unsafe in either Romanian or English). This makes me curious about two things:
1 - Should I mark unsafe content in my own language, if it does come up?
2 - Is a production app in another language viable?
Hi Cosmin, welcome to the beta! The content filter errs on the side of caution, in order to help prevent unsafe completions from slipping through.
You can read our documentation on the content filter here. In short:
“We generally recommend not returning to end-users any completions that the Content Filter has flagged with an output of 2.”
The API works best with English, although it’s absolutely viable with many other languages (including e.g. for translation), as seen here.
I hope that helps, and please let me know if you have any other questions!
Hi Joey, thanks for the answer!
My curiosity is directed towards some particular keywords, if I may: “cum” means “how” in my language, and something else in English. How should I, as a user of the system, mark this? Is it profane, or valid because in my language it’s a legitimate word? I guess this would affect the model if such cases compounded.
Hi Cosmin, if you feel that something has been mis-classified, you can report the mis-classification in the playground.
Additionally, if the content filter returns a 0 or 1, you can show those completions to the user. If it returns a 2, you can try to re-generate the completion, or show it anyway if the logprob of the label is < -0.355, since a logprob below that threshold means the filter isn’t confident in the classification.
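To make that concrete, here’s a minimal sketch of the handling logic described above. The function name and parameters are illustrative, not part of the API; it just assumes you already have the filter’s label and the logprob of that label:

```python
def should_show_completion(filter_label: int, label_logprob: float,
                           logprob_threshold: float = -0.355) -> bool:
    """Decide whether to surface a completion to the end-user.

    filter_label: 0 (safe), 1 (sensitive), or 2 (unsafe), as returned
    by the content filter. A "2" with a logprob below the threshold is
    a low-confidence classification, so the completion may still be
    shown (or you can re-generate instead).
    """
    if filter_label in (0, 1):
        return True
    # Label 2: show only if the filter is unsure about it.
    return label_logprob < logprob_threshold


# Examples:
print(should_show_completion(0, -0.01))  # safe → show
print(should_show_completion(2, -0.01))  # confident unsafe → hide
print(should_show_completion(2, -0.50))  # low-confidence "2" → show
```

In practice you’d likely prefer re-generating over showing a low-confidence “2”, but the threshold check gives you that option.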