Bug: Fine-tuned model max output is always 100 tokens

I had this bug in the Playground and reported it, but got no help.
Now the same thing happens through the API.
When I call my fine-tuned model, the response is always capped at 100 tokens, and there is no way to increase it even when I explicitly declare a higher limit.
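
For reference, here is a minimal sketch of the kind of call I'm making (the model ID and prompt are placeholders), with max_tokens raised explicitly:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    # placeholder fine-tuned model ID
    model="ft:gpt-4o-mini-2024-07-18:my-org:custom:abc123",
    messages=[
        {"role": "user", "content": "A question that should get a long answer"},
    ],
    max_tokens=1000,  # declared limit, yet the reply still stops around 100 tokens
)

print(response.choices[0].message.content)
# finish_reason shows whether the model stopped on its own ("stop")
# or hit the token cap ("length")
print(response.choices[0].finish_reason)
print(response.usage.completion_tokens)
```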

Did you fine-tune with < 100 word answers?

What does your system message look like?

What model did you fine-tune?


I fine-tuned 4o-mini, since I'm on Tier 2, and the training answers are much longer than 100 tokens.
I left the system message empty at inference to try it out, although in the fine-tuning data every prompt has system content.
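
To make the setup concrete, this is roughly the shape of one training example versus what I'm sending now (all contents are placeholders):

```python
# One line of the fine-tuning JSONL - every example includes a system message
training_example = {
    "messages": [
        {"role": "system", "content": "<system prompt used in training>"},
        {"role": "user", "content": "<question>"},
        {"role": "assistant", "content": "<answer well over 100 tokens>"},
    ]
}

# Current API call - system message deliberately left out to test it
inference_messages = [
    {"role": "user", "content": "<question>"},
]
```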