Do you change hyperparams when re-finetuning an existing fine tune?

For example:

If fine-tuning an existing fine-tune on <50 examples, I set n_epochs to 1.

But you can also decrease the default learning_rate_multiplier, or change the batch size.

There are only three hyperparameters you can tweak, but setting them explicitly seems particularly useful when fine-tuning a fine-tuned model, as I'm not sure the system identifies it as a re-fine-tune when they're left undefined.
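For concreteness, here's a sketch of how I pass those three overrides when continuing from an existing fine-tune using the OpenAI Python SDK. The model name, file ID, and the actual hyperparameter values below are placeholders/assumptions, not recommendations:

```python
# The three tunable hyperparameters, set explicitly instead of "auto".
# These particular values are illustrative guesses for a small re-fine-tune.
hyperparameters = {
    "n_epochs": 1,                     # fewer passes for a tiny dataset
    "learning_rate_multiplier": 0.05,  # lower LR so we nudge, not overwrite
    "batch_size": 1,
}

def continue_fine_tune(training_file: str, base_model: str):
    """Start a fine-tuning job on top of an existing fine-tuned model."""
    from openai import OpenAI  # deferred import so the sketch loads without the SDK
    client = OpenAI()
    return client.fine_tuning.jobs.create(
        training_file=training_file,
        model=base_model,  # pass the existing ft:... model name to continue it
        hyperparameters=hyperparameters,
    )

# Usage (placeholder IDs):
# continue_fine_tune("file-abc123", "ft:gpt-3.5-turbo:my-org::abc123")
```

The key point is that `model` takes the fine-tuned model's name rather than a base model, so training continues from its current weights.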

What do you find works?

Continuing a fine-tune builds on the weights and training already present in the model; the previous fine-tuning run is not re-done.


But it seems you might not want to leave the defaults in place if, for instance, you're fine-tuning an existing fine-tune with a smaller dataset.

Hence the hyperparameter change.