API Response does not reach token limit

I am generating articles using API calls. Although I am using max_tokens=2000, it always returns articles of fewer than 1,000 words.

Words are not tokens, though. One English word is roughly 1.3 tokens on average. Did you check the actual token count with the OpenAI tokenizer tool?

For example, a string like max_tokens=2000 is 5 tokens, even though it contains no spaces.
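You can also check counts locally. Here's a minimal sketch using the tiktoken package; the encoding name is an assumption, so swap in the one that matches your model:

```python
# Minimal token-counting sketch, assuming the tiktoken package is installed
# (pip install tiktoken). cl100k_base is the encoding used by gpt-3.5/gpt-4;
# pick the encoding that matches your model.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "max_tokens=2000"
token_ids = enc.encode(text)
print(token_ids)       # the individual token ids
print(len(token_ids))  # token count -- roughly words * 1.3 for English prose
```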

I am taufiq, replying from a different account.

Yes, I am aware of the relationship between token and word counts, and also of how prompt tokens, response tokens, and total tokens relate.

The thing is, the OpenAI API generally does not reach 2000 or 2500 tokens even if I set a higher limit and design the prompt differently. Could you please share an example prompt that generates an article with ~2000 tokens in the response?

max_tokens isn’t used to influence the length of the response; it only sets a hard limit (the output is simply cut off when it reaches the limit).

If you want a longer response you’ll need to do it through prompt engineering (“Write a 4 paragraph article about…”). The model is also not great at hitting exact word counts; see the related discussion and the sketch below.
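Roughly, something like this (a sketch using the current openai Python SDK; the model name and prompt are placeholders, not a recipe for exactly 2000 tokens):

```python
# Sketch with the openai Python SDK (v1.x); model and prompt are illustrative.
# max_tokens only caps the completion -- the output is never padded up to it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        # Ask for structure (a section/paragraph count) rather than a token
        # target; the model follows structure better than exact word counts.
        "content": (
            "Write a detailed article about urban beekeeping with an "
            "introduction, six body sections covering distinct subtopics, "
            "and a conclusion."
        ),
    }],
    max_tokens=2000,  # hard ceiling: output is truncated here if reached
)

choice = resp.choices[0]
print(choice.message.content)
# finish_reason == "length" means the cap truncated the reply;
# "stop" means the model finished on its own before hitting max_tokens.
print(choice.finish_reason, resp.usage.completion_tokens)
```

Checking finish_reason tells you which case you’re in: if it’s “stop”, the model simply decided the article was done, and raising max_tokens won’t change anything.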
