How can I get the OpenAI API to produce output longer than 3,300 characters?

I am trying to build a storytelling app, but all of the API models usually generate only about 2,500 characters, and rarely give me a maximum of 4,000 characters as a result.
I have tried all the models (gpt-4, gpt-3.5, 16k, 32k, turbo, etc.) and changed prompts.
Nothing helps.

I need 5k to 10k characters as a good output.

Any ideas?

As far as I know, the output token limit of all models is 4k tokens.

https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo Ref link. It's 4,096 output tokens, to be accurate.
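Note that the 4,096-token cap translates into a character budget that depends heavily on the language of the text. A rough back-of-the-envelope estimate (the ~4 characters-per-token figure for English and the lower ratio for non-Latin scripts are approximations, not exact values):

```python
def max_output_chars(max_tokens: int = 4096, chars_per_token: float = 4.0) -> int:
    """Rough upper bound on output characters for a given token cap."""
    return int(max_tokens * chars_per_token)

# English averages roughly 4 characters per token, so a 4,096-token
# cap tops out somewhere around 16k characters. Dense or non-Latin
# text tokenizes less efficiently and can fall well below 2 chars/token.
print(max_output_chars())            # optimistic English estimate
print(max_output_chars(4096, 1.5))   # pessimistic non-Latin-script estimate
```

This is why "3,300 symbols" can already be close to the practical ceiling for some languages even though the nominal cap sounds much larger.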

The AI models are able to perform most of their tasks thanks to training: training on the type of output expected for a particular input.

They are not trained to write 10k-token stories in response to input. In fact, the opposite: when you ask for something long, the output gets cut short and wrapped up prematurely.

If you want to see how poorly an AI performs without "write me a story" training, you can use a base model on the completions endpoint, such as davinci-002. It has a 16k context length, and almost all of that could be used for output, if you could get it to make sense past 500 tokens.

The base model is not trained to "chat". However, it is a model that can be fine-tuned. If you were to train the AI on hundreds of examples where an input is offered and then a cohesive, maximum-length output is produced, it might be possible. But one misstep in the middle of token production and your story will go off the deep end.

Here's an example case from another community member where fine-tuning yielded good results, including in terms of length:

So can you give me a solution for getting 10k-character storytelling out of the API?

There’s no silver bullet solution here, I’m afraid.

You can try to fine-tune a model on training examples that demonstrate the desired nature and length of the output (see the example provided). With a bit of luck, this could get you to that length.

General information on fine-tuning is available here:
https://platform.openai.com/docs/guides/fine-tuning
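To give a sense of what the training data looks like: chat-model fine-tuning uses a JSONL file where each line is a JSON object with a `messages` list. A minimal sketch of building one such line (the prompts and the placeholder story text here are invented for illustration — your real examples would pair short prompts with full-length stories of the size you want the model to learn):

```python
import json

# One training example in the chat fine-tuning JSONL format.
# You would write hundreds of lines like this, one per example,
# each assistant message containing a complete long story.
example = {
    "messages": [
        {"role": "system", "content": "You write long, complete short stories."},
        {"role": "user", "content": "Write a story about a lighthouse keeper."},
        {"role": "assistant", "content": "<full 5k-10k character story goes here>"},
    ]
}

# Each example is serialized as a single line in the .jsonl file.
line = json.dumps(example, ensure_ascii=False)
print(line[:60])
```

The key point is that the assistant messages in your training set must actually be as long as the outputs you want; the model learns the length distribution along with the style.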

If you don't wish to fine-tune, you'd have to go with a workaround and combine the outputs of multiple API calls to reach that length.