I’m working on an app and I’d like to use OpenAI. I’m not sure if the app will pass review because of fantasy violence and storytelling issues. Is there a way to get some advice or a “pre-review” that isn’t on a public message board/Discord? Thanks in advance!
It’s clearly stated in the app review guidelines what they accept and what they don’t. If you can frame your use case and backend logic in a way that satisfies the team, then it’s good to go. Would fantasy violence be categorized as “high-risk”?
I assume the content filter is the most important part. Other than that, OpenAI will review the app and send you their response or any changes they’d want.
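For anyone wondering what that looks like in practice, here is a minimal sketch of pre-screening generated story text with the Moderations endpoint before showing it to users. The function name, sample text, and pass/fail logic are my own assumptions for illustration, not anything required by the review process.

```python
# Hypothetical illustration: screening story output with OpenAI's
# moderation endpoint. This is a sketch, not an official review step.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def is_story_text_allowed(text: str) -> bool:
    """Return True if the moderation endpoint does not flag the text."""
    response = client.moderations.create(input=text)
    result = response.results[0]
    # result.categories exposes per-category flags such as `violence`;
    # fantasy-violence prose is the kind of thing that may trip it.
    return not result.flagged


sample = "The knight swung his enchanted blade at the dragon."
print(is_story_text_allowed(sample))
```

In an app you would likely gate the response on this check (regenerate, soften, or block) rather than just printing the result.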
Thanks @Xerxes. I think all chatbots are considered “high risk”. But I don’t want to build the app if the entire concept doesn’t work for OpenAI. However, to answer my own question, it looks like you can email “support@openai.com” and they will help you out. They are helping us now.
I’m just putting it here in case anyone is searching through the forums like I was.
Pre-approval of a use case by OpenAI is no longer required to go live with an application. You now simply need to follow the terms of service regarding prohibited uses of AI. That is what this old topic is about.
Your app doesn’t seem to have an AI component. It seems you may be trying to communicate further with an AI about your non-AI phone app idea.