OpenAI is winding down the fine-tuning API and platform - Discussion Thread

So if I have fine-tuned models on gpt-4.1-mini and OpenAI deprecates it, does this mean my model will never be usable for inference? Would the money and compute I spent on it be wasted? For SFT, only gpt-4.1 variants are available, and for RL, only o4-mini is available, so if I'm not wrong, the freedom to test and fine-tune is already limited.

Also, if models from GPT-5.5 onwards will be good at instruction following, what would developers do with the money they have on the API platform? As inference costs get cheaper due to data-center expansion, and as demand continues to increase, will all inference just be done with the stock models released by OpenAI?

o4-mini already has a shutoff date later in 2026. That was the first sign that fine-tuning was doomed.

The gpt-4.1 series has not appeared on the deprecation list with a shutoff date. I anticipate you will get six months of deprecation notice before shutoff, and as the notice says, shutting off a base model also shuts off any fine-tuned models built on it.

You choose which model you use on the API: you run a particular model by specifying its name in the API request. To run inference with a fine-tuned model, you pass the job-generated name (which includes your organization prefix) once the fine-tuning job completes. The model stays available until OpenAI turns off its base model or you delete it.
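The mechanics above can be sketched in a few lines. This is a minimal illustration, not an official snippet: the model IDs and the `pick_model` helper are hypothetical, and it only builds a chat-completions-style request body rather than sending a live API call.

```python
# Hypothetical IDs for illustration; a real fine-tuned model name is
# generated by the fine-tuning job in the form
#   ft:<base-model>:<org-prefix>::<job-suffix>
BASE_MODEL = "gpt-4.1-mini"
FINE_TUNED_MODEL = "ft:gpt-4.1-mini:my-org::example123"  # assumed, not real

def pick_model(available_models: set) -> str:
    """Prefer the fine-tuned model; fall back to the base model
    if the fine-tune has been shut off or deleted."""
    if FINE_TUNED_MODEL in available_models:
        return FINE_TUNED_MODEL
    return BASE_MODEL

def build_request(prompt: str, available_models: set) -> dict:
    """Assemble a chat-completions-style request body (not sent here)."""
    return {
        "model": pick_model(available_models),
        "messages": [{"role": "user", "content": prompt}],
    }
```

The fallback in `pick_model` is one way to soften the deprecation risk discussed in this thread: if the fine-tuned model disappears along with its base model, requests degrade to the plain base (or a successor) model instead of failing outright.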

OpenAI pricing has not become cheaper for API developers with the new 2026 releases; instead, there have been huge hikes. For now, there is a wide variety of models to run an API call against, as long as you accept that there is no longer any aspect of machine-learning experimentation, and nothing emergent, novel, inspirational, or educational to come out of these boring consumer products again.

It seems they have given up on fine-tuning for now, at least until we get some Stargates up and running with GPU power to spare…