Successful Pre-launch Review Request examples?

Has OpenAI or the maker of any approved GPT-3 application ever published their Pre-Launch Review Request? Seeing examples of approved requests would give others an idea of how best to draft and scope their own requests, and what level of detail to use. It could be really helpful for the community if there are teams willing to share!


Hi @jkane, this is a very timely post! I recently joined OpenAI to work on the user experience of our API with respect to safety - in particular, I'm hoping to get the safety specialist team more engaged with this forum. Please feel free to say more about what would be helpful here - and let me know if you have any other thoughts on what would be helpful from a developer perspective with respect to our safety processes!


Excellent idea, @jkane! That would help tremendously and could save time on both ends. Details on requests that have been declined could also be instructive. Have any video tutorials been created for this?


Congrats on the new role, @rosiecam! In general I am just wondering how in-depth to go with my request (I have 2-3 ideas in development now) and where to focus. Based on your response, is it generally the case that safety is the primary criterion for approval? That is, you're less concerned with how compelling or original the use case is per se, and more with whether the application has little or no likelihood of abuse or unsafe content generation? If that's true, it would be helpful to know as I draft requests. Do we need to be extremely specific about technical architecture, or will a high-level summary suffice?

Those are the specific questions I have, but if your team could publish some real-life examples of approved requests, or else fabricate a few (GPT-3 could probably write them, lol), that would probably help inform a lot of users in the community!


Yes, that's correct - the primary purpose of the pre-launch review is to ensure the safety of the products being built with the API - so make sure you've read the use case guidelines! A high-level summary of the system should be fine - we can always follow up with you if we need more information 🙂

I should also say that we want to help people get approved - even if we turn down a request, we try to provide feedback so that the product can be amended to meet our safety policies. We can also be reached via email if you need to provide clarifications or have any questions.


Thanks for the information, @rosiecam.

Do I need to meet all the requirements in the pre-launch questionnaire in order to get approved? There are some aspects I want to improve gradually, and I want to get the product approved as soon as possible.

The reason is that if I spend time reworking the product to address every question or concern, it may take too long, and customers may not want to use the product to begin with.


Can you say more about what you mean by the requirements in the pre-launch questionnaire? Or do you mean the use case guidelines? The questionnaire itself doesn't contain requirements - you should answer the questions as best you can to help us get an idea of any potential risks we may need to mitigate.

It would also be helpful to know which aspects you have in mind that you're concerned about.

I agree examples could be a huge asset to the community if there are teams willing to share. I think the issue is that there is, understandably, some grey area in what could reasonably be approved. I think this is a good thing: each use case is contextual.

I, for example, am curious about the limitations on text summarization, and what counts as "user input." If a user uploads documents via the files endpoint and can then query those documents with the answers endpoint, what counts as the user input? You could essentially use this as a workaround for the text summarization limitations: submit text as a document, then use the answers endpoint to summarize it.
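
To make the pattern concrete, here's a rough sketch of what I mean, using the 2021-era Python client and the (legacy) files and answers endpoints. The file name, question, and example values are all invented - I'm genuinely unsure whether this usage would pass review, which is exactly my question:

```python
import openai  # 2021-era client exposing the (legacy) answers endpoint

openai.api_key = "sk-..."  # placeholder

# Upload a JSONL document (one {"text": ...} object per line) for answers
uploaded = openai.File.create(
    file=open("report.jsonl", "rb"),
    purpose="answers",
)

# The "query" is phrased as a question, but the completion is effectively
# a summary of the retrieved passages - a workaround of the summarization
# limitations?
response = openai.Answer.create(
    search_model="ada",
    model="curie",
    question="What are the main points of this document?",
    file=uploaded["id"],
    examples_context="The document describes quarterly sales figures.",
    examples=[["What does the document describe?", "Quarterly sales figures."]],
    max_tokens=100,
    stop=["\n"],
)
print(response["answers"][0])
```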

I'd be happy to get an answer to this question, but others will have different questions. I think seeing reviews that help us understand the ethos of OpenAI in evaluating grey-area use cases would help. It's unclear whether things in the grey area are "likely to be approved, but in need of review," or "highly unlikely to be approved unless extremely well defended," or somewhere in between.