Hello everyone,
This issue has to do with fine-tuning models for a text classification task. I read the documentation and followed it as suggested. I was trying to further fine-tune an already existing fine-tuned model; however, after a couple of refinements, the model started predicting labels that I had not included in the training data. I encountered this problem twice, so I will give the context of both experiments.
I should also mention that the retraining dataset was much smaller than the original. I read that this is possible and that the OpenAI engineers suggest reducing the learning rate by a factor of 2 to 4 in that situation.
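For reference, this is roughly how I kick off each retraining run (a minimal sketch using the legacy fine-tunes API; the file ID, model name, and multiplier below are placeholders, not my actual values):

```python
# Sketch: continuing a fine-tune on a smaller dataset with a reduced learning rate
# (legacy openai<1.0 SDK; IDs/names are placeholders).
import openai

openai.api_key = "sk-..."  # assumption: key set here or via an environment variable

resp = openai.FineTune.create(
    training_file="file-XXXXXXXX",          # hypothetical uploaded JSONL file ID
    model="curie:ft-personal-2023-01-01",   # hypothetical existing fine-tuned model to refine
    learning_rate_multiplier=0.05,          # reduced roughly 2x-4x from what I used originally
)
print(resp["id"])
```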
1st case
I retrained a model using the labels " Ir" and " Re" (suffix separator: ā ->') and it started predicting labels relevant to the context This one is not a major issue since one of the two is present.
2nd case
I retrained a model using the labels " false" and " true" (suffix separator: ā\n\n###\n\nā) and it started predicting weird tokens (like \n). This is a major issue since none of the relevant labels is present.
You can see the first case on the left and the second on the right (sorry for using a single screenshot; I cannot upload two).
I should underline that in the second case, a substantial number of such incidents occurred. Also, in the second case it happened after the 2nd retrain, while in the first case it happened after the 5th retrain. Furthermore, I used logprobs = 2, since this is a binary classification and I need the log probabilities for the 2 labels.
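And this is roughly how I query the model afterwards (again a sketch; the model name and prompt are placeholders):

```python
# Sketch: querying the fine-tuned classifier with the legacy Completions API.
import openai

resp = openai.Completion.create(
    model="curie:ft-personal-2023-02-01",   # hypothetical fine-tuned model name
    prompt="some input text\n\n###\n\n",    # prompt ends with the same separator used in training
    max_tokens=1,                            # I expect a single-token label
    temperature=0,
    logprobs=2,                              # top-2 log probabilities, one per label
)
print(resp["choices"][0]["text"])
print(resp["choices"][0]["logprobs"]["top_logprobs"][0])
```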
Any thoughts on this?
Thank you in advance!