I have now fine-tuned the GPT-3.5-turbo model.
When I call the API with a prompt,
the model only outputs a completion for that input and does not seem to learn from the input data.
Am I right that the model is not learning?
I know that sessions are not maintained when using the GPT API;
that is, an input from a previous call is not remembered on the next call.
(The context does not carry over between calls.)
So I want to continue training my fine-tuned model on the prompts I enter,
but is that impossible?
If it is possible, can you tell me how? Please.
Yes, you can further fine-tune an already fine-tuned model. To do so, you follow essentially the same process as regular fine-tuning; you only need to reference your existing fine-tuned model when submitting the training data.
Unless the new data is considerably better than the original fine-tuning data, it's perfectly reasonable to continue fine-tuning the existing model.
You can look at it from the other side: if we already fine-tuned a model on very good data and got good results, then we can improve on those results by training it further with more very good data.
If for some reason you determine that the previous training data was not optimal, you can consider starting from scratch.
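As a concrete illustration, the continuation workflow looks roughly like this. This is a sketch assuming the OpenAI Python SDK v1.x; the example reports, categories, file name, and the fine-tuned model ID are all placeholders you would replace with your own:

```python
import json

# Hypothetical new labeled reports to continue training on,
# in the chat fine-tuning format the endpoint expects.
examples = [
    {"report": "Server room temperature exceeded 35C overnight.", "category": "Facilities"},
    {"report": "Customer was charged twice for one order.", "category": "Billing"},
]

# Write the JSONL training file: one {"messages": [...]} record per line.
with open("continuation_data.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        record = {
            "messages": [
                {"role": "system", "content": "Classify the report into a category."},
                {"role": "user", "content": ex["report"]},
                {"role": "assistant", "content": ex["category"]},
            ]
        }
        f.write(json.dumps(record) + "\n")

# Submitting the job requires an API key; the model ID below is a placeholder
# standing in for your existing fine-tuned model:
# from openai import OpenAI
# client = OpenAI()
# training_file = client.files.create(
#     file=open("continuation_data.jsonl", "rb"), purpose="fine-tune"
# )
# job = client.fine_tuning.jobs.create(
#     training_file=training_file.id,
#     model="ft:gpt-3.5-turbo-0125:my-org::abc123",  # your existing fine-tuned model
# )
```

The key point is the `model` parameter: instead of a base model name, you pass the ID of your existing fine-tuned model, and the new job continues from it.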
Let me explain again.
I fine-tuned the model so that when a report is entered, GPT classifies it into a category from a classification system that has already been established.
So far I have fine-tuned the model on about 500 examples. When classifying reports with this fine-tuned model, I wonder whether it is possible to train the fine-tuned model directly on the input reports and their output categories.
Rather than fine-tuning a new model again, couldn't it learn simply from the inputs given to the already fine-tuned model?
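For reference, the classification call described above is inference only: sending a report to the fine-tuned model returns a category but never updates the model's weights. A minimal sketch of the request payload, assuming the chat-completions format and a placeholder model ID:

```python
def build_classification_request(report: str, model: str) -> dict:
    """Build a chat-completion payload for classifying one report.

    This is inference only: sending this payload to the API returns a
    category but does not train or modify the fine-tuned model.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Classify the report into a category."},
            {"role": "user", "content": report},
        ],
    }

payload = build_classification_request(
    "Customer was charged twice for one order.",
    "ft:gpt-3.5-turbo-0125:my-org::abc123",  # placeholder fine-tuned model ID
)
# client.chat.completions.create(**payload) would return the category text;
# to actually learn from new (report, category) pairs, you must collect them
# into a training file and submit a new fine-tuning job as described above.
```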