After a few days of trying: fine-tuning an existing fine-tuned model does not mean adding more examples to that model. It starts another new model from scratch, the same as fine-tuning a base model.
I thought it would be possible to add new examples to the previously fine-tuned model's data by fine-tuning it again and again. It should work like this, so that you avoid paying for the whole dataset again after you add some new examples to your file later.
According to the documentation you can do this:
https://platform.openai.com/docs/guides/fine-tuning/advanced-usage
"
Continue fine-tuning from a fine-tuned model
](OpenAI API)
If you have already fine-tuned a model for your task and now have additional training data that you would like to incorporate, you can continue fine-tuning from the model. This creates a model that has learned from all of the training data without having to re-train from scratch.
To do this, pass in the fine-tuned model name when creating a new fine-tuning job (e.g. -m curie:ft-&lt;org&gt;-&lt;date&gt;). Other training parameters do not have to be changed; however, if your new training data is much smaller than your previous training data, you may find it useful to reduce learning_rate_multiplier by a factor of 2 to 4."
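To make the quoted docs concrete, here is a minimal sketch of the request you would build when continuing a fine-tune from an existing fine-tuned model rather than a base model. The file ID is a made-up placeholder, and the helper function is hypothetical; only the parameter names (training_file, model, learning_rate_multiplier) come from the docs above.

```python
def continued_finetune_params(file_id, finetuned_model, new_lr_multiplier=None):
    """Build the request body for a fine-tune job that continues from an
    already fine-tuned model instead of a base model."""
    params = {
        "training_file": file_id,
        # Pass the fine-tuned model's name here, not the base model's:
        "model": finetuned_model,
    }
    # Per the docs, consider reducing learning_rate_multiplier by 2-4x
    # when the new dataset is much smaller than the original one.
    if new_lr_multiplier is not None:
        params["learning_rate_multiplier"] = new_lr_multiplier
    return params

params = continued_finetune_params(
    "file-NEW_EXAMPLES",          # placeholder file ID for the new examples
    "curie:ft-<org>-<date>",      # the existing fine-tuned model's name
    new_lr_multiplier=0.05,
)
```

You would then submit these parameters when creating the new fine-tuning job; the call itself needs an API key, so it is left out here.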
Do you mean it’s an amendment of the previous data instead of replacing it from scratch?
An amendment of the previous data.
My goal would be to get a feel for how much adding more data improves the model’s output.
@ceesmeuleman Related question, does fine-tuning a fine-tuned model consume the original model? Or does it retain an original copy and create a new, improved copy? Thanks!
Yes, I would also like to know: does fine-tuning a previously fine-tuned model create a new model, or does it update the previous one? Could we keep the previous model in production while we test out the new version?
Hi and welcome to the forum!
Presumably, you could train a model with your current dataset and have that as your production model, then train another model with that same dataset plus the additional training data. That would give you a safe version of what you are wanting to do.
Thank you!
Oh, but that will add to our cost, as we will have to pay for the initial training data twice. Is there any way to avoid this and keep versions of the fine-tuned model that we can continue to use?
I’ve honestly not tried it, my guess is that it will further train the existing model, so I would not rely on that for a production environment. I understand it comes with a cost implication, but as I’ve not performed this myself before, I’m erring on the side of caution.
Alternating models might help otherwise. Train two on the initial dataset, then put one in production and add data to the other until it is ready for production. Then deploy that one and add the new training set to the former production model…
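The alternating idea above is essentially a blue/green rotation: one fine-tuned model serves production while the other is retrained, then they swap roles. A tiny sketch, with entirely hypothetical model names:

```python
def swap_roles(production, staging):
    """After the staging model has been trained on the new data and
    validated, promote it to production and retire the old production
    model to staging, ready to receive the next batch of training data."""
    return staging, production

# Both start from the same initial dataset (placeholder names):
prod, staging = "ft-model-A", "ft-model-B"

# ...staging ("ft-model-B") is fine-tuned further with new data, then:
prod, staging = swap_roles(prod, staging)
```

This keeps a known-good model serving traffic at all times, at the cost of maintaining two fine-tuned models.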
Just in case there is no other solution (I don’t know how it works in that regard).
I mean, the docs state that a new model is created when you train a fine-tuned model further, but I haven’t tried it myself, so I don’t know if it keeps the previous version.