Does fine-tuning limit a model's usefulness?

Hi all, I have a fine-tuned model. It’s working as documented.

But as I’ve gone deeper into fine-tuning, I’m wondering whether there is such a thing as over-fine-tuning, or, from another perspective, whether fine-tuning can limit the otherwise very flexible base model?

Yes, it’s called overfitting, and it typically happens when you train for too many epochs. An overfit model memorizes its training examples and loses its ability to generalize beyond them.
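
For context, the epoch count is a hyperparameter you control when creating the fine-tuning job. A minimal sketch with the v1 OpenAI Python SDK (the file IDs and model name here are placeholders); lowering `n_epochs` and supplying a validation file so you can watch validation loss are two common ways to keep overfitting in check:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Both file IDs are placeholders for previously uploaded JSONL files.
# Fewer epochs means fewer passes over the training data, which
# reduces the risk of the model simply memorizing it.
job = client.fine_tuning.jobs.create(
    training_file="file-abc123",      # your training examples
    validation_file="file-def456",    # held-out examples, never trained on
    model="gpt-4o-mini-2024-07-18",
    hyperparameters={"n_epochs": 2},  # default is "auto", often higher
)
print(job.id, job.status)
```

If validation loss starts rising while training loss keeps falling, that divergence is the classic sign of overfitting.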


Yes, it is quite easy to overfit a model during fine-tuning.

One might make an AI that can only say things like “I’m sorry, I can’t assist with that”…

The sweet spot is where the model can handle inputs similar to your training examples, in the same manner, without having seen them before.
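
One way to hunt for that sweet spot, sketched below under stated assumptions: hold out examples the model never trains on, increase the epoch count step by step, and stop when held-out accuracy stops improving. The `train_fn` and `evaluate_fn` callables and the `find_sweet_spot` helper are hypothetical placeholders for whatever fine-tuning stack you use:

```python
import random

def find_sweet_spot(examples, train_fn, evaluate_fn, max_epochs=10):
    """Train with increasing epochs; stop when held-out accuracy degrades.

    train_fn(train_set, epochs) -> model, and
    evaluate_fn(model, val_set) -> accuracy in [0, 1]
    are placeholders for your actual training and eval code.
    """
    random.shuffle(examples)
    split = int(0.8 * len(examples))
    train_set, val_set = examples[:split], examples[split:]

    best_acc, best_epochs = 0.0, 0
    for epochs in range(1, max_epochs + 1):
        model = train_fn(train_set, epochs)
        acc = evaluate_fn(model, val_set)
        if acc > best_acc:
            best_acc, best_epochs = acc, epochs
        elif acc < best_acc:
            # Held-out accuracy is falling while training continues:
            # the model has started overfitting, so stop here.
            break
    return best_epochs, best_acc
```

The key point is that the stopping signal comes from data the model has never seen, not from training loss.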

