Max_Tokens - Best practice for long-form answers?

After trying several methods to determine the optimal max_tokens value for long-form content, I have found that the most straightforward approach with the GPT-3.5/4 API is to omit it entirely. By leaving max_tokens unset, I get the longest response possible for a given prompt size. Whether this is a feature or an anomaly remains unclear to me. I intend to keep using this strategy and hope it will not be remedied. :wink: Have others adopted this technique?
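For reference, here is a minimal sketch of what that looks like as a raw request (the model name and prompt are just placeholders; the point is simply that max_tokens is absent from the body):

```ts
// Minimal sketch: a chat completion request with max_tokens omitted.
// Model and prompt are placeholders; the key detail is that the body
// contains no max_tokens field at all.
const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Write a long-form article about token budgeting." }],
    // max_tokens deliberately omitted: the completion can then use
    // whatever room is left in the context window.
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```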


If you leave max_tokens off, it defaults to the maximum possible. This applies to GPT-3.5 and GPT-4.

GPT-3 (e.g. text-davinci-003) required you to set the value; left unset, it defaulted to just 16 tokens. It looks like the new default is by design.
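For contrast, a sketch of the old completions-style call, where you had to budget max_tokens yourself (the value here is just an illustrative guess):

```ts
// Sketch of the legacy completions endpoint (text-davinci-003):
// max_tokens defaulted to a tiny 16, so long-form output required
// computing and setting it explicitly.
const response = await fetch("https://api.openai.com/v1/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "text-davinci-003",
    prompt: "Write a long-form article about token budgeting.",
    max_tokens: 3000, // you had to budget this: context size minus prompt tokens
  }),
});

const data = await response.json();
console.log(data.choices[0].text);
```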


If you leave max_tokens off, it defaults to the maximum possible.

:wink: Exactly … no need to calculate it anymore like with davinci-003.


At the moment this is not true for me: if I don’t set max_tokens, I get a truncated response. Unfortunately, there doesn’t seem to be a JS library I can use to count tokens either.


It still seems to work for me. But you could just use the 4 chars/token estimate to get in the ballpark, as in the sketch below.
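Something like this, as a rough sketch (the 4-chars-per-token ratio is only an approximation, and the 4,096 context size assumed here is for gpt-3.5-turbo):

```ts
// Rough token budgeting using the ~4 characters per token heuristic.
// This only gets you in the ballpark; a real tokenizer is needed
// for exact counts.
const CHARS_PER_TOKEN = 4;
const CONTEXT_WINDOW = 4096; // assumed gpt-3.5-turbo context size

function estimateTokens(text: string): number {
  return Math.ceil(text.length / CHARS_PER_TOKEN);
}

function maxTokensFor(prompt: string, safetyMargin = 50): number {
  // Leave the rest of the context window for the completion,
  // minus a small margin since the estimate can undercount.
  return Math.max(0, CONTEXT_WINDOW - estimateTokens(prompt) - safetyMargin);
}

console.log(maxTokensFor("Write a long-form article about token budgeting."));
```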
