Max tokens - how to get GPT to use the maximum available tokens?

Using the GPT-3.5 API.

What is the default for max_tokens? Would the default be the maximum available?

I just want the AI to use the maximum tokens available to it.

To do this it seems I have to calculate how many tokens I've used and tell it what's likely left for the response. Is this correct, or is there a simpler way?

Also, is GPT aware of the max_tokens parameter and does it attempt to limit its response? Or is it a hard limit that simply cuts the response off when it goes over?

Hi,

Yes, omitting the max_tokens parameter for GPT-3.5 will give the reply the maximum number of tokens available.

The model is not “aware” of the limit; it simply generates tokens until it produces a likely stop token.
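
For example, a minimal sketch using the pre-1.0 openai Python package (the API key and prompt text are placeholders) where max_tokens is simply left out of the request:

```python
# Minimal sketch: no max_tokens is passed, so the reply can use whatever room
# is left in the model's context window after the prompt.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarise the plot of Hamlet."}],
    # max_tokens deliberately omitted
)
print(response["choices"][0]["message"]["content"])
```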

The max_tokens parameter in the GPT-3.5 API is optional. If set, it limits the response to that number of tokens. If not set, the limit is the model’s maximum context length (4,096 tokens for GPT-3.5 Turbo) minus the tokens used by the prompt. max_tokens is a hard cap: the response is truncated when the limit is reached, not planned around it. To use the maximum tokens available while still setting the parameter, you would calculate the remaining tokens from the number of tokens in your prompt.
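
If you do want to set max_tokens explicitly rather than omit it, here is a minimal sketch of that calculation using the tiktoken package (the 4,096-token window, prompt text, and safety margin are assumptions; chat-format messages also add a few tokens of per-message overhead, so the count here is approximate):

```python
# Estimate prompt tokens and set max_tokens to roughly what remains.
import tiktoken

CONTEXT_WINDOW = 4096  # assumed gpt-3.5-turbo context size
prompt = "Explain how transformers work, in detail."  # placeholder prompt

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
prompt_tokens = len(encoding.encode(prompt))

# Leave a small margin for the chat-format overhead tokens.
max_tokens = CONTEXT_WINDOW - prompt_tokens - 16
print(f"Prompt uses ~{prompt_tokens} tokens; setting max_tokens={max_tokens}")
```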