How to fine-tune a fine-tuned model

Hi OpenAI team,

I have a question about fine-tuning a fine-tuned model.

Is it possible to fine-tune a model that has already been fine-tuned? For example, if I fine-tune a model with dataset A and then fine-tune the result with dataset B next week, will the final model handle both datasets A and B?

If not, how can I achieve this? Should I keep accumulating datasets, fine-tune the base model with all of the accumulated data each time, and then use the resulting model?

That approach seems like it would lead to a very large amount of training data over time.

I would appreciate your help on this matter.

Best regards

That type of continued training is possible on the older completions fine-tune endpoint (being retired in January 2024), but it is not yet implemented on the new fine-tuning endpoint.

For now, you can only submit a complete fine-tune job on the full dataset and then test the results — an enhancement, mostly, to your bill.
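In practice, the accumulate-and-retrain approach the question describes might look like the sketch below: merge all of your JSONL training files into one and submit a single fine-tune job on the combined file. The `merge_jsonl` helper and the file names are illustrative, not part of any SDK; only the final (commented) API call uses the actual OpenAI fine-tuning endpoint.

```python
import json
from pathlib import Path

def merge_jsonl(paths, out_path):
    """Concatenate several JSONL training files into one combined file.

    Each line is parsed with json.loads first, so a malformed record
    fails loudly here instead of during the fine-tune job.
    Returns the number of records written.
    """
    count = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for path in paths:
            for line in Path(path).read_text(encoding="utf-8").splitlines():
                if not line.strip():
                    continue  # skip blank lines between records
                json.loads(line)  # validate; raises on malformed JSON
                out.write(line + "\n")
                count += 1
    return count

# Example (file names are hypothetical):
#   n = merge_jsonl(["dataset_a.jsonl", "dataset_b.jsonl"], "combined.jsonl")
# Then upload "combined.jsonl" and start one fine-tune job on it, e.g. with
# the openai Python SDK:
#   client.fine_tuning.jobs.create(training_file=file_id, model="gpt-3.5-turbo")
```

Each weekly retrain would then re-run the whole job on the combined file — which is exactly why the dataset, and the bill, keeps growing.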
