Some questions about fine-tuning

I fine-tuned gpt-3.5-turbo and received the model name and job ID. I have a few questions about this situation:

1- Can I add new training data to the same model I trained?
2- Can I extract some training data from the model I trained?
3- What is the life expectancy of the model I trained? Will this model always be available? Is there any way to somehow back up this model in case the endpoint is removed?


  1. Yes, you can continue training an already fine-tuned model: simply use the fine-tuned model name as the model to be trained.
  2. I'm not sure what you mean by extract; could you clarify, please?
  3. The life expectancy is unknown; there have been no announcements regarding it. I would assume it will be available for at least the lifetime of the base model it was trained from.
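As a minimal sketch of point 1, here is what continuing a fine-tune could look like with the OpenAI Python SDK (v1.x): prepare one additional training example in the chat-format JSONL that gpt-3.5-turbo fine-tuning expects, then submit a job whose base model is the existing fine-tuned name. The file name, file ID, and message contents are illustrative, not taken from the thread; the actual API calls are left commented out because they need a valid key and uploaded file.

```python
import json

# One training example in the chat fine-tuning JSONL format
# (one JSON object per line, each with a "messages" list).
example = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is fine-tuning?"},
        {"role": "assistant", "content": "Further training of a base model on your own examples."},
    ]
}

# Write the additional data as JSONL (hypothetical file name).
with open("additional_training.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")

# With the OpenAI Python SDK (>= 1.0) and OPENAI_API_KEY set, the follow-up
# job would be submitted like this; note the base model is the *fine-tuned*
# model name, not "gpt-3.5-turbo":
#
#   from openai import OpenAI
#   client = OpenAI()
#   upload = client.files.create(
#       file=open("additional_training.jsonl", "rb"),
#       purpose="fine-tune",
#   )
#   job = client.fine_tuning.jobs.create(
#       training_file=upload.id,
#       model="ft:gpt-3.5-turbo-0613:personal::abcdef",
#   )
```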

1- I use the following Google Colab notebook to train the model. After training completes, it gives me a new model name. When I entered “gpt-3.5-turbo” as the model to train, it gave me the new model “ft:gpt-3.5-turbo-0613:personal::abcdef”. If I enter ft:gpt-3.5-turbo-0613:personal::abcdef as the model to be trained, will it output yet another new model name? If it does, the model name used at the endpoint will naturally change. Is there any way to prevent this?

2- I meant: if I entered the wrong data into the model, can I remove that data?

  1. I haven't tried that, so you'd have to run a small test to see whether the old name gets updated or the operation returns a new name; either way, you get the additional training.

  2. You cannot undo or delete training data from a fine-tuned model.
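The suggested test experiment boils down to reading `fine_tuned_model` off the finished job. A small polling helper like the sketch below would do it; the real `fetch` would be `client.fine_tuning.jobs.retrieve(job_id)` from the OpenAI Python SDK, stubbed here as a plain function so the logic is self-contained.

```python
import time

def wait_for_model(fetch, job_id, poll_seconds=5.0, max_polls=120):
    """Poll a fine-tuning job until it finishes, then return the model name.

    `fetch(job_id)` is assumed to return a dict-like object with "status"
    and "fine_uned_model"-style fields; with the real SDK you would read
    attributes off the returned job object instead.
    """
    for _ in range(max_polls):
        job = fetch(job_id)
        if job["status"] == "succeeded":
            # Compare this name with the one you passed in as the base
            # model to see whether continued training produced a new name.
            return job["fine_tuned_model"]
        if job["status"] in ("failed", "cancelled"):
            raise RuntimeError(f"job ended with status {job['status']}")
        time.sleep(poll_seconds)
    raise TimeoutError("job did not finish in time")
```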

I trained the gpt-3.5-turbo model with only one training example. I noticed that this fine-tuned model answers any question much faster than the normal gpt-3.5-turbo model. What is the reason for this? Is it because many people use the gpt-3.5-turbo endpoint while my model is used only by me?

A gpt-3.5-turbo fine-tune does generate about 50% faster for me. Those are reasonable guesses, but nobody at OpenAI is going to answer why models are faster or slower.

The tradeoff currently seen with fine-tunes is that streamed output can randomly take many seconds of latency before it begins.