How can I ensure the number of output tokens meets a minimum limit?

I have tried setting a system message like this:

You are an expert instruction follower. Please ENSURE the size of message in output is at least 20000 characters

but the output is always only around 500 tokens. I don't know the best way to ensure this.
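One workaround (not mentioned in this thread, just a sketch) is to generate in chunks and keep asking the model to continue until the combined output reaches the target length. Here `generate` is a hypothetical stand-in for whatever API call you are actually making:

```python
def ensure_min_length(generate, prompt, min_chars=20000, max_rounds=10):
    """Call `generate` repeatedly, asking for continuations,
    until the combined output reaches `min_chars`."""
    output = generate(prompt)
    rounds = 1
    while len(output) < min_chars and rounds < max_rounds:
        # Feed the text back and ask the model to keep going.
        output += generate(
            prompt + "\n\n" + output + "\n\nContinue from where you left off."
        )
        rounds += 1
    return output

# Example with a dummy generator that returns fixed-size chunks:
dummy = lambda p: "lorem ipsum " * 50   # 600 characters per call
text = ensure_min_length(dummy, "Write an essay.", min_chars=2000)
print(len(text) >= 2000)
```

This is more reliable than a single prompt because each call only has to extend the text, not hit a huge length target in one shot.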

Welcome to the community.

What model are you using? Settings?

As it’s trained in “tokens,” not characters or words, it can be hard for the LLM to get specific word lengths correct.
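For rough planning, a commonly cited rule of thumb for English text is about 4 characters per token, so a 20000-character target is on the order of 5000 tokens, which is well above the length of a typical single reply. A quick back-of-envelope estimate:

```python
def estimate_tokens(text, chars_per_token=4):
    # Rough heuristic: ~4 characters per token for typical English text.
    # Actual tokenization varies by model and language.
    return len(text) // chars_per_token

print(estimate_tokens("x" * 20000))
```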

Can you share your system prompt? More info?
