Can you fine-tune a fine-tuned model? Is there a limit on fine-tuned models?

Hello! Can I use a previously fine-tuned model as the base model for a subsequent
fine-tuning job, or must I always retrain from scratch using the original base model?

Also, is there a limit on the number of fine-tuned models we can have per account? If so, does anyone know what it is?

Thank you so much!


Hi and welcome to the community!

Yes, you can continue fine-tuning a model. During training, checkpoints can be created and later used as new starting points for further tuning.

I recommend checking out the documentation and the cookbook to explore your options and understand why ongoing fine-tuning can be useful for specific tasks. The link above focuses on supervised fine-tuning, but the general approach applies across different methods.

Hope this helps!


More precisely:

When creating a supervised fine-tuning job, instead of passing a base model name such as "gpt-4.1-mini-2025-04-14" as the "model" parameter of the API call, you pass the output model of a previous fine-tuning job, which has a name of the form "ft:gpt-4.1-mini-2025-04-14:our_org:prefix:13358ad".
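A minimal sketch of what that looks like in Python, assuming the OpenAI SDK's `fine_tuning.jobs.create()` endpoint; the file ID and model names below are placeholders, not real resources:

```python
# Sketch: continuing fine-tuning from a previously fine-tuned model.
# All IDs and names here are hypothetical placeholders.

# Output model of the earlier fine-tuning job, used here in place of
# a base model name like "gpt-4.1-mini-2025-04-14".
previous_model = "ft:gpt-4.1-mini-2025-04-14:our_org:prefix:13358ad"

def build_job_params(training_file_id: str, model: str) -> dict:
    """Assemble the keyword arguments for a fine-tuning job request."""
    return {
        "training_file": training_file_id,  # ID of an uploaded JSONL file
        "model": model,                     # a fine-tuned model as the base
    }

params = build_job_params("file-abc123", previous_model)

# With the SDK installed and OPENAI_API_KEY set, the call would be:
# from openai import OpenAI
# client = OpenAI()
# job = client.fine_tuning.jobs.create(**params)
```

The only difference from a first-time job is the value of `"model"`; everything else (file upload, job polling) works the same way.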

Notable (but undocumented) is that each fine-tuning job gets its own adaptive learning rate based on the size of its training file, so that the effect of 10 examples is still seen, while 10,000 examples isn't overpowering. The new job has no awareness of the previously learned weights. This means that "continuation" can act more like "overwriting" unless you reduce the learning rate multiplier from its default of 1.0, or reduce the default n_epochs to a low number like 1 or 2.

As mentioned above, a few checkpoints are also generated partway through training, and these can be used directly for inference via an additional suffix such as :ckpt-step-250. They are not needed as fine-tuning input; rather, they let you inspect the results of partial training passes if the final model seems overfitted and unable to generalize beyond the examples.
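A sketch of using such a checkpoint for inference, assuming the checkpoint's inference name is the fine-tuned model name with the ":ckpt-step-NNN" suffix appended; the names and step number are placeholders:

```python
# Sketch: a checkpoint is addressed like any other model, just with a
# ":ckpt-step-NNN" suffix on the name. Names here are placeholders.

final_model = "ft:gpt-4.1-mini-2025-04-14:our_org:prefix:13358ad"

def checkpoint_name(model: str, step: int) -> str:
    """Build the inference name for a mid-training checkpoint."""
    return f"{model}:ckpt-step-{step}"

# To compare a partially trained checkpoint against the final model,
# run the same prompt through both:
# for m in (checkpoint_name(final_model, 250), final_model):
#     resp = client.chat.completions.create(
#         model=m,
#         messages=[{"role": "user", "content": "some held-out prompt"}],
#     )
```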
