AMA on the 17th of December with OpenAI's API Team: Post Your Questions Here

Preference fine-tuning can encourage longer responses; however, each model still has a maximum number of output tokens.
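As a rough sketch of what this means in practice, the request can still cap output length explicitly (assuming the Chat Completions `max_tokens` parameter; the model name below is only illustrative):

```python
# Build a request payload for a fine-tuned model. Regardless of how
# verbose preference fine-tuning has made the model, the response is
# hard-capped at max_tokens output tokens (and by the model's own
# output-token ceiling, whichever is lower).
request = {
    "model": "gpt-4o-mini",  # illustrative model name
    "messages": [
        {"role": "user", "content": "Summarize preference fine-tuning."}
    ],
    "max_tokens": 256,  # explicit cap on output tokens
}

print(request["max_tokens"])
```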
