Request: continuous finetuning?

It would be helpful if finetuned models could be finetuned further with more data, i.e. something like

openai api fine_tunes.create -t <data>.jsonl -m <finetuned-model>

Right now, the API only allows passing base models to the finetune endpoint. In cases where a training set grows over time, it is redundant to retrain a base model on the entire dataset from scratch instead of continuing to finetune the model on just the new slice of data. Retraining can get costly and time-consuming (especially for curie and davinci), and puts unnecessary additional load on OpenAI’s servers.

Implementation-wise, it would only require initializing the trainer with the finetuned model’s weights instead of the base model’s parameters. I think this feature would be greatly beneficial for API users. Is it possible for OpenAI to implement it?


Continual finetuning is a critical component of my design of NLCA (natural language cognitive architecture). Continual finetuning will be necessary for AGI to be realized. Even if we cannot technically perform continual finetuning right now, we can at least continually accumulate data and periodically perform finetuning operations. For now, this is my stopgap measure in the pursuit of AGI and a machine that can continually learn.
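The stopgap described above (accumulate data, periodically finetune) can be sketched in a few lines. This is a hypothetical illustration, not OpenAI tooling: the file name `training_data.jsonl`, the 100-example threshold, and the helper names are all my own assumptions.

```python
import json
import os

# Hypothetical stopgap: keep one cumulative JSONL training file, and flag
# when enough new examples have accumulated to justify re-running a full
# finetune from the base model. File name and threshold are assumptions.
CUMULATIVE_FILE = "training_data.jsonl"
RETRAIN_THRESHOLD = 100  # new examples to accumulate before retraining

def append_examples(examples, path=CUMULATIVE_FILE):
    """Append prompt/completion dicts to the cumulative JSONL dataset."""
    with open(path, "a", encoding="utf-8") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")

def examples_since(last_count, path=CUMULATIVE_FILE):
    """How many examples were added since the last finetune run."""
    if not os.path.exists(path):
        return 0
    with open(path, encoding="utf-8") as f:
        total = sum(1 for _ in f)
    return total - last_count

def should_retrain(last_count, path=CUMULATIVE_FILE):
    """True once enough new data has accumulated to retrain."""
    return examples_since(last_count, path) >= RETRAIN_THRESHOLD
```

When `should_retrain` returns True, you would kick off a normal finetune on the whole cumulative file with the existing CLI, e.g. `openai api fine_tunes.create -t training_data.jsonl -m <base-model>`, and record the new example count.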

Hey @daveshapautomator, couldn’t we fine-tune by starting from scratch and updating the base JSONL file? Is there really a difference?

Yes that’s what I mean. By accumulating more data, you can continue to integrate more information. I just imagine that there may be more efficient methods in the future, such as repeatedly fine-tuning one model.

@daveshapautomator continuous finetuning is just online learning, except in batches. Continuous training is a very basic and common practice in ML production systems.

@NSY yes there is a substantial difference. Take a look at the finetune pricing docs. Suppose you have a dataset with 1M tokens and receive a new batch of 100K tokens. Retraining davinci from scratch would cost 1,100K tokens × $0.03/1K tokens × 4 epochs = $132, versus 100K tokens × $0.03/1K tokens × 1 epoch = $3 to finetune on just the new data for one epoch. That’s a $129 difference.


The difference is that online learning isn’t available for GPT-3 :stuck_out_tongue_winking_eye:

@daveshapautomator read my original post please. This is a feature request lol.

Also changed the title to make things clearer.


I have the same need! Hope they will include this in the API soon.