Content Completer requirements library - are the rules the same for everyone?

Hi,

Going through the ‘Requirements library’ in the Use Case guidelines (link), I can see that the maximum output specified for ‘Content completer’ is 30 tokens.
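Just to make that limit concrete, here's a minimal sketch of what a compliant call would look like, assuming the openai Python client as it existed at the time and the davinci engine; the prompt and API key below are placeholders of my own, not anything from the guidelines:

```python
import openai

openai.api_key = "sk-..."  # placeholder key

# A content completer finishes the user's current sentence or thought.
# Capping max_tokens at 30 is what keeps the output inside the guideline.
response = openai.Completion.create(
    engine="davinci",                 # engine name as used at the time
    prompt="The best way to brew pour-over coffee is",
    max_tokens=30,                    # the limit from the Requirements library
    temperature=0.7,
    stop=["\n\n"],                    # stop at a blank line so it can't run on
)

print(response.choices[0].text)
```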

I also understand that one developer's approval for a use case does not automatically entitle others to the same approval.

However, I see one specific company, Conversion[dot]ai, going way beyond the 30-token limit for content completion.

As you can see in the following video (at the 12 minute 7 second mark), they’re generating around a paragraph of content (so about 70 tokens more than the specified limit) when the user hits ‘generate’: How to create a COMPLETE ARTICLE with AI (Jarvis.ai LONG-FORM) [ENTIRE PROCESS] - YouTube

Again, I absolutely respect that they must have done their due diligence on safety, and that the OpenAI team approved them to go beyond the 30-token limit for content completion on that basis…

However, is it even possible now for new developers/companies coming on board to get approval for the same exception as Conversion[dot]ai - provided, of course, that ample security checks and fail-safes are put in place to avoid misuse, and the app matches the same level of security and risk mitigation as Conversion[dot]ai?

If the answer is simply ‘no’, then isn’t OpenAI helping create monopolies for those who came ‘before’?


Pinging @ishant.singh for his insight :slight_smile:

Well, if two developers can meet the same level of (security) requirements, then the rules should be the same for both, and approvals for exceptions (like the one mentioned in the original post) should be considered on a case-by-case basis. Flatly saying ‘no’ to the one late to the party is neither fair nor open.

I agree that it’s not fair to retrofit existing approvals, but rejecting or severely limiting new developers (in comparison to ones already approved) makes the platform unfair.

Imagine Apple saying that no new developers may use the onboard camera simply because privacy risks were found! It’s entirely right to raise the bar for security and requirements, but rejecting new exception approvals out of hand does not make any sense.


I’d be very curious to hear what the OpenAI team has to say about this… @Adam-OpenAI @joey

Hi everyone,

Thank you for sharing these concerns.

Sometimes there can be other mitigating factors that allow for an application to be approved or not approved (e.g. being a legacy approval, working with select partners to help us evolve our guidelines, etc).

Moreover, many applications use other APIs (not GPT-3) for certain features, so those features are naturally not subject to our guidelines.

Best,
Joey


Have you seen Shortlyai.com?

I still don’t understand how OpenAI allowed it to work the way it does. I just prompted a random idea and it ended up generating 147 tokens (according to the tokenizer). And with all the advanced commands, it’s basically a copy of OpenAI’s Playground.
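For reference, here is roughly how I counted the output, using the GPT-2 tokenizer from the transformers library as a stand-in for the online tokenizer (GPT-3 shares the same BPE vocabulary, so the counts should match):

```python
from transformers import GPT2TokenizerFast

# GPT-3 uses the same byte-pair-encoding vocabulary as GPT-2,
# so this gives the same count as OpenAI's online tokenizer.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

generated = "..."  # paste the generated paragraph here
token_ids = tokenizer.encode(generated)

print(f"{len(token_ids)} tokens")  # for my test run this printed 147
```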

What security are we talking about here?

And why are the rules different for new developers?

I agree with @timusk. It’s neither fair nor open.

I hope the issue is not with OpenAI’s policy itself but with unclear rules that fail to spell out the criteria for exceptions.

Example:

Disallowed: Topic-based generation
  • We don’t allow tools that let users generate more than ~2 sentences on a topic of their choosing (30 tokens)
  • This disallows functions like “Writing a paragraph on a topic of your choice”, or similar features

Shortlyai:
https://help.shortlyai.com/getting-started/slash-commands


To add on to @m-a.schenk’s answer, you can see our responses in the FAQ section of the use-case documentation.

My use case looks like what company X is doing. Why do you allow X to do it but not my use case?

Sometimes there can be other mitigating factors that allow for an application to be approved or not approved (eg being a legacy approval, working with select partners to help us evolve our guidelines, etc).

How will these guidelines evolve over time?

The OpenAI team will update these guidelines from time to time and hopes to increase the amount that is safely doable with the OpenAI API. In particular, the methods we use are:

  • Through risk mitigation: We are working to develop new technical methods and assurance strategies to manage risks, and over time we hope to unlock many use cases that are currently restricted.
  • Through partnerships: We are interested in partnering with a handful of developers to unlock some of the higher-risk use cases. We are especially motivated to work with developers who have established domain expertise and who can help us identify and mitigate the risks in those areas.
  • Through experience: We expect to increase the level of detail in our guidelines over time as we learn more.