I have several fine-tuned models that were fine-tuned for two epochs. The system automatically creates a checkpoint after the first epoch, and I can use it for inference like a normal model fine-tuned for just one epoch. But when I pass this checkpoint as the base model for further fine-tuning (I don't like how the two-epoch model performs and want to continue fine-tuning from after the first epoch with different hyperparameters), the API says that this model doesn't exist.
I am aware that checkpoints are different objects, but I can't find any way to use them as a base model for further fine-tuning.
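For reference, this is roughly the call that gets rejected. A minimal sketch using the openai Python SDK; the training file ID, checkpoint model ID, and hyperparameters below are placeholders, not real values:

```python
from openai import OpenAI

client = OpenAI()

# Attempt to start a new fine-tuning job with a checkpoint's model ID as the base.
# Both IDs are placeholders; the checkpoint ID follows the ":ckpt-step-N" shape
# the API reports for checkpoints.
client.fine_tuning.jobs.create(
    training_file="file-XXXXXXXX",
    model="ft:gpt-3.5-turbo-0125:my-org:my-suffix:XXXXXXXX:ckpt-step-1000",
    hyperparameters={"n_epochs": 1},  # the different hyperparameters mentioned above
)
# This is the call that comes back with a "model does not exist" error.
```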
The fact that you're denied the ability to use a checkpoint as a base model for continued fine-tuning pretty much answers your own question for anyone else wondering the same thing.
Without insider feedback from OpenAI, one can only read between the lines of the documentation (all collected in one place below) and try to guess at the nature of a checkpoint whose model ID ends with " :96olL566:ckpt-step-2000", and whether it is the type of model weights or simply the API not expecting that name that prevents its use.
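For what it's worth, the checkpoint objects and their full model IDs can at least be inspected through the API. A sketch assuming the openai Python SDK's checkpoint listing endpoint, with a placeholder job ID:

```python
from openai import OpenAI

client = OpenAI()

# List the checkpoints produced by a finished fine-tuning job (placeholder job ID).
checkpoints = client.fine_tuning.jobs.checkpoints.list(
    fine_tuning_job_id="ftjob-XXXXXXXX"
)
for ckpt in checkpoints.data:
    # Each checkpoint carries a step number and its own model ID,
    # e.g. one ending in ":ckpt-step-2000".
    print(ckpt.step_number, ckpt.fine_tuned_model_checkpoint)
```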
You can also fine-tune a fine-tuned model, which is useful if you acquire additional data and don’t want to repeat the previous training steps.
Currently, only the checkpoints for the last 3 epochs of the job are saved and available for use. We plan to release more complex and flexible checkpointing strategies in the near future.
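In other words, continuing from a completed fine-tuned model is the documented path; whether a checkpoint's model ID is accepted the same way is exactly the open question here. A sketch of that documented path with placeholder IDs, again assuming the openai Python SDK:

```python
from openai import OpenAI

client = OpenAI()

# Continue training from a finished fine-tuned model (not a checkpoint):
# the base model is the full fine-tuned model ID, without any ":ckpt-step-N" suffix.
client.fine_tuning.jobs.create(
    training_file="file-XXXXXXXX",  # placeholder: additional training data
    model="ft:gpt-3.5-turbo-0125:my-org:my-suffix:XXXXXXXX",  # placeholder fine-tuned model ID
    hyperparameters={"n_epochs": 1},  # adjusted hyperparameters for the second pass
)
```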