Question regarding max_tokens

Hi @florianwalther

It depends entirely on the prompt: the longer the prompt, the fewer tokens are left over for the completion.

Here’s the definition of `max_tokens` in the API Reference:

> The maximum number of tokens to generate in the completion.
>
> The token count of your prompt plus max_tokens cannot exceed the model’s context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
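In other words, the completion budget is just the context length minus the prompt's token count. Here's a minimal sketch of that arithmetic; the numbers are illustrative, and in practice you'd measure the prompt's token count with the model's actual tokenizer (e.g. OpenAI's tiktoken library) rather than guess it:

```python
def max_completion_tokens(prompt_tokens: int, context_length: int = 2048) -> int:
    """Largest valid max_tokens value for a prompt of the given size."""
    budget = context_length - prompt_tokens
    if budget <= 0:
        raise ValueError("Prompt already fills the model's context window")
    return budget

# A 500-token prompt on a 2048-token model leaves 1548 tokens for the completion:
print(max_completion_tokens(500))        # 1548
# The same prompt on a 4096-token model leaves 3596:
print(max_completion_tokens(500, 4096))  # 3596
```

Setting `max_tokens` higher than this budget will make the API reject the request, so it's worth computing the remaining headroom before each call.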