What are the example token length limits for fine-tuning?

Some time ago, the documentation listed limits on how many tokens a training example for fine-tuning can have for different models. These limits seem to have vanished from the documentation; I cannot find them anymore.
So I assumed the limit must just be the model's context length. But no, I just got an "Example too long" error when trying to fine-tune gpt-4.1, which has a context length of 1M tokens.

Where can I find info on the maximum length of a training example for fine-tuning for different models?


I had the same question.

I couldn't find any cached versions of the old documentation - they don't seem to have been captured on the Wayback Machine, for example.

A YouTube video from ~10 months ago titled "OpenAI API: Fine-Tuning Models, Part 3 - Validating Data Files & Cost Calculations" seems to have screenshots of the old docs.

The limit for gpt-4o-mini appears to have been 65,536 tokens at that time, with a notice saying 128,000 was "coming soon".
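
If anyone else runs into the "Example too long" error, here is a minimal sketch for flagging oversized examples in a training JSONL file before uploading. It assumes the ~65,536-token cap from those old screenshots (verify the current limit for your target model) and uses the o200k_base encoding from tiktoken, which is the one used by the gpt-4o family; the count is an estimate since the real tokenizer adds a few tokens of per-message overhead.

```python
import json
import tiktoken

# Assumed per-example cap; the old docs reportedly listed 65,536 tokens
# for gpt-4o-mini, but check the current limit for your model.
MAX_EXAMPLE_TOKENS = 65_536

# o200k_base is the encoding used by the gpt-4o family of models.
enc = tiktoken.get_encoding("o200k_base")

def example_token_count(example: dict) -> int:
    """Rough token count for one chat-format training example.

    Counts only the message content; the actual tokenizer adds a small
    per-message overhead, so treat this as an estimate.
    """
    return sum(
        len(enc.encode(m.get("content") or ""))
        for m in example["messages"]
    )

with open("train.jsonl", "r", encoding="utf-8") as f:
    for line_no, line in enumerate(f, start=1):
        count = example_token_count(json.loads(line))
        if count > MAX_EXAMPLE_TOKENS:
            print(f"line {line_no}: ~{count} tokens exceeds {MAX_EXAMPLE_TOKENS}")
```

Running this over your file before creating the fine-tuning job at least tells you which examples to trim or split, even while the official per-model limits remain undocumented.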