Output Token Limits for Instruct series

Regarding the Use Case Guidelines: do some of the limits need to be revised in light of the new Instruct series?

For example, blog titles are capped at 20 output tokens per generation, which is only enough for about one and a half blog titles, so the second one usually gets cut off. (Not sure why the cap is so low, though; they're only blog titles!)

So we would need to run three generations to produce three blog titles.

But with the Instruct series, we could instruct the engine to output three titles in a single generation and cap the output tokens at 60 instead (20 × 3). Should I try to get this approved, or not bother?
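As a rough sketch of what that single request might look like: the payload below targets the legacy Completions endpoint, with an assumed Instruct engine name and a hypothetical prompt; the point is only that the token budget becomes 20 tokens per title times 3 titles.

```python
# Hypothetical Completions request payload for generating three blog
# titles in one generation (engine name and prompt are assumptions).
TOKENS_PER_TITLE = 20
NUM_TITLES = 3

payload = {
    "engine": "davinci-instruct-beta",  # assumed Instruct engine
    "prompt": "Write 3 blog titles about home gardening, one per line:",
    "max_tokens": TOKENS_PER_TITLE * NUM_TITLES,  # 60 instead of 20
    "n": 1,  # one generation containing all three titles
}
```

One generation at 60 tokens replaces three separate 20-token generations, which is exactly the trade-off the approval question is about.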

Any insights are highly appreciated! Thanks.


Thanks a lot. Just to clarify: for your example (n=3 and max_tokens=20), the maximum is 3 API calls per minute, correct?

Based on:
https://beta.openai.com/docs/use-case-guidelines/use-case-requirements-library

Blog tools

Maximum rate limits for an end-user: 9 generations/minute, 135 generations/hour, 3 generations/action initiated (For example, if you return 3 generations per end-user action, they must be limited to 3 actions/minute)
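Reading the quoted limits back: the per-minute cap is on generations, so the allowed number of end-user actions per minute depends on how many generations each action returns. A minimal sketch of that arithmetic (the function name is mine, not from the guidelines):

```python
# Per the blog-tools guideline: 9 generations/minute per end-user.
GENERATIONS_PER_MINUTE = 9

def max_actions_per_minute(generations_per_action: int) -> int:
    """How many actions an end-user may initiate per minute if each
    action returns `generations_per_action` generations."""
    return GENERATIONS_PER_MINUTE // generations_per_action
```

So with 3 generations per action (e.g. n=3), the end-user is limited to 3 actions per minute, matching the example in the guidelines; with 1 generation per action they could initiate 9.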
