The latest deprecation announcement makes it sound like several models, such as ft-gpt-4.1-nano-2025-04-14, are being shut down. In that particular example, it says to use gpt-5-nano instead. But I still have no ability to fine-tune gpt-5-nano, gpt-5-mini, or regular gpt-5. Will access to this be rolling out soon? Or am I misreading the deprecation announcement? Or are you actually removing fine-tuning support?
Good question!
I have pinged the team for clarification.
Yeah, what about fine-tuned models? Will our fine-tuned model also be deprecated, or is it just the base model that gets shut down?
Hello,
You’re not misreading the shutdown notice: fine-tuned versions of gpt-4.1-nano-2025-04-14 are scheduled to be removed on October 23, 2026, and the deprecations page lists gpt-5-nano as the recommended substitute.
The “substitute model” column is a migration target for API usage after the shutdown; it does not mean that gpt-5-nano is currently available for fine-tuning. Fine-tuning itself is not being removed, but GPT-5-family models are not listed in the public fine-tuning docs as supported base models for fine-tuning.
If you need a fine-tuned replacement today, please use one of the currently supported fine-tuning models documented in the fine-tuning guides. If your goal is specifically to migrate from a fine-tuned gpt-4.1-nano model, we recommend evaluating base gpt-5-nano against your existing evals/prompts, since that is the documented substitute for that deprecated model family.
References:
- Deprecation page: developers.openai.com/api/docs/deprecations
- SFT docs: supervised fine-tuning
- DPO docs: direct preference optimization
- RFT docs: reinforcement fine-tuning
Well, this is very uncool. We’re using nano specifically because the response times are much better. gpt-5-nano absolutely will not do our classification correctly without fine-tuning.
We all know that gpt-5 fine-tuning exists and a bunch of people already have access to it (people have published papers!). Why can’t the rest of us start fine-tuning on gpt-5 (nano/mini/regular)?