Fine tuning - understanding the training loss of my FT task

So I have fine-tuned a model for a 'writing style transfer' task. More specifically, my training input/output pairs look like this:

["Revise the following 'email' so it reads more engaging and convincing. DO NOT change the subject, topic focus, or the story if any. Focus ONLY on the style and tone. \n\nEmail:\n```{append email}```,
"{expected style-transferred email}"]

The emails range between 300 and 500 words, and I have about 600 such pairs. Apart from the training data, everything else uses the default settings, so I did not change the default number of epochs or the metric. The fine-tuning job completed successfully.
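In case it helps to see the data shape concretely, here is a minimal sketch of how pairs like the above could be serialized for upload. This assumes the chat-style JSONL fine-tuning format (one JSON object per line with a `messages` array); the example email text and the `to_jsonl` helper name are made up for illustration.

```python
import json

# Hypothetical (prompt, expected rewrite) pair; real emails would be 300-500 words.
pairs = [
    (
        "Revise the following 'email' so it reads more engaging and convincing. "
        "DO NOT change the subject, topic focus, or the story if any. "
        "Focus ONLY on the style and tone. \n\nEmail:\n```Hi team, quick update...```",
        "Hi team! I'm excited to share a quick update...",
    ),
]

def to_jsonl(pairs):
    """Serialize (prompt, completion) pairs as chat-format JSONL lines."""
    lines = []
    for prompt, completion in pairs:
        record = {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": completion},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

# Each line of the result is one training example, ready to write to a .jsonl file.
jsonl_data = to_jsonl(pairs)
```

With ~600 pairs this file would have ~600 lines, one example per line.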

I have some concerns about the training loss, however. Through the dashboard, I can see that the training loss did not converge:

So I want to ask:

  • Should I increase the number of epochs?
  • Should I provide more training data pairs?

Either way, I suppose I should try to train the model until the training loss converges?

Thanks

No, don't worry too much about that graph; it's more relevant for larger datasets :laughing:


Agreed. Check how the model performs in practice and, based on that, make a call on whether to fine-tune further.
