What is the default value of the max_tokens parameter in the chat completions API? See the screenshot below. This is the only parameter for which the OpenAI dev team didn't indicate a default value - why?
Per earlier docs, on the chat completions endpoint, when max_tokens is omitted it effectively defaults to the model's remaining context length (the context window minus the prompt tokens), capped at up to 4096 output tokens for some models, as far as I remember. Vision models were different: they had max_tokens set to 256 by default, which is why responses from them were often cut off unless you raised it explicitly.
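Practically, if you don't want to depend on the implicit default, just pass max_tokens explicitly. A minimal sketch of the request body (the model name and token limit here are only illustrative values, not recommendations):

```python
import json

# Chat-completion request body built by hand. max_tokens is optional;
# when the key is omitted, the API falls back to its implicit default.
payload = {
    "model": "gpt-4",  # illustrative model name
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 300,  # explicit cap on tokens generated in the reply
}

# Dropping the key entirely is how you opt into the default behavior.
payload_default = {k: v for k, v in payload.items() if k != "max_tokens"}

print(json.dumps(payload, indent=2))
print("max_tokens" in payload_default)  # False
```

The same idea applies through the official SDKs: leaving the parameter unset means the server decides the cap, which is exactly the behavior the docs leave undocumented here.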