How to force GPT to preserve a desired output length in characters or words

Hi. I have a prompt and want to limit the output to a number of characters or words that I provide. I'm trying to do this by adding these lines to the prompt (the lines themselves were also written with GPT's help):

"The narrative should be based on the descriptions provided and be precisely between 100 and 200 symbols as desired output length, including spaces and punctuation. "

I replace the min and max values dynamically in code. It works well only for small values. However, when I pass something like 'between 500 and 600 symbols', GPT often generates output longer than I asked for. I've tried different prompts to limit the output length, but with no luck.
Has anyone faced the same issue and found a solution?
Could you advise where to dig to solve this problem?

I am using gpt-4-1106-preview
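For reference, a minimal sketch of the substitute-and-check approach described above (the prompt wording is the one from my post; the actual API call is not shown):

```python
def build_prompt(min_chars: int, max_chars: int) -> str:
    """Substitute the length bounds into the prompt template."""
    return (
        "The narrative should be based on the descriptions provided and be "
        f"precisely between {min_chars} and {max_chars} symbols as desired "
        "output length, including spaces and punctuation."
    )

def within_bounds(text: str, min_chars: int, max_chars: int) -> bool:
    """Check whether a generated reply respects the character budget."""
    return min_chars <= len(text) <= max_chars

# The bounds are filled in dynamically before each request.
prompt = build_prompt(500, 600)
```

In practice I pass `prompt` to the model and then check the reply with `within_bounds`; for larger budgets the check fails more and more often.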

Welcome to the forum.

Because it's trained on tokens (pieces of words), it's difficult for the LLM to count characters or words correctly. One thing you can try is to ask for a bigger unit, such as sentences or paragraphs. Or, likely better, give a one-shot example of the exact length you want.
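For instance, a one-shot setup could look like this: the assistant example is itself inside the target band, so the model has a concrete sample to imitate (the wording here is purely illustrative):

```python
# Example reply that is itself within the 100-200 character band (~124 chars),
# so the model sees what the desired length actually looks like.
example_output = (
    "The old lighthouse stood alone on the cliff, its beam sweeping "
    "the dark water while gulls wheeled silently overhead at dusk."
)

messages = [
    {"role": "system",
     "content": "Write a narrative between 100 and 200 characters."},
    {"role": "user", "content": "Describe a lighthouse at dusk."},
    {"role": "assistant", "content": example_output},  # the one-shot example
    {"role": "user", "content": "Describe a quiet harbor at dawn."},
]
```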

Since one token may contain more than one character, you could consider setting your max_tokens parameter to a value slightly below 100. It won't guarantee the output stays under 200 characters, but it should help!
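One way to pick that value is a rough characters-to-tokens conversion. A sketch, assuming roughly 4 characters per token for English text (the real ratio depends on the tokenizer and the text), with some headroom so the model can finish its sentence rather than be cut off mid-word:

```python
# Assumption: ~4 characters per token on average for English prose.
AVG_CHARS_PER_TOKEN = 4

def max_tokens_for_chars(max_chars: int, safety: float = 1.25) -> int:
    """Token cap that roughly corresponds to a character budget.

    `safety` leaves headroom so generation can end naturally
    instead of being truncated exactly at the limit.
    """
    return int(max_chars / AVG_CHARS_PER_TOKEN * safety)

print(max_tokens_for_chars(200))  # → 62
```

The result would then be passed as the `max_tokens` argument of the API call.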

This may also be a case where fine-tuning is helpful, since you want the output in a specific format.
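A sketch of what one line of chat fine-tuning training data (JSONL) might look like, where every example's assistant reply already sits inside the target band; the prompt and narrative text here are illustrative:

```json
{"messages": [{"role": "system", "content": "Write a narrative between 100 and 200 characters."}, {"role": "user", "content": "Describe a stormy sea."}, {"role": "assistant", "content": "Waves crashed against the rocks as thunder rolled over the grey horizon, and the small boats strained at their moorings in the howling wind."}]}
```

With enough such examples, the model can pick up the target length implicitly from the training replies rather than from counting instructions.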

@PaulBellow I love the idea of providing examples, so it would understand what the desired length looks like. I'll try it, thank you!

@cyzgab Ty, but I believe max_tokens isn't exactly what I'm trying to achieve. It would just hard-cut the generated output, wouldn't it?
Could you clarify, please, what you mean by fine-tuning? What exactly should I try to fine-tune? Tune the model so it understands what 'length in characters' means?