I fine-tuned the OpenAI babbage model with my custom data (100K prompts). The results are almost satisfactory, but I want to do better. How can I fine-tune the same model a second time with another 100K prompts, without losing the first round of training or overfitting?
Thanks in advance!
Continued fine-tuning of an already fine-tuned model isn’t available yet on the new endpoint for the just-released models; OpenAI lists it as “coming soon”.
It’s currently not available per se as a feature. Why not fine-tune from scratch with all 200k prompts? Personally, I’ve never been too satisfied with the results of fine-tuning an already fine-tuned model.
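If a single job within the limits is an option, combining the two datasets is just a matter of concatenating the JSONL training files before upload. A minimal sketch (the filenames are placeholders, and the JSONL prompt/completion format is assumed):

```python
import json

def merge_jsonl(paths, out_path):
    """Concatenate JSONL training files into one, skipping blank
    lines and validating that every record parses as JSON."""
    count = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for path in paths:
            with open(path, encoding="utf-8") as f:
                for line in f:
                    line = line.strip()
                    if not line:
                        continue
                    json.loads(line)  # raises if a record is malformed
                    out.write(line + "\n")
                    count += 1
    return count

# Example (hypothetical filenames):
# merge_jsonl(["first_100k.jsonl", "second_100k.jsonl"], "combined_200k.jsonl")
```

The combined file can then be uploaded and used to start one fresh fine-tuning job from the base model.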
Because 100k prompts is the maximum (50,000,000 tokens).
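Given the limits mentioned above, it's worth sanity-checking a training file before upload. A rough sketch, assuming the JSONL prompt/completion format; the token estimate is a crude ~4-characters-per-token heuristic, not the tokenizer OpenAI actually uses:

```python
import json

MAX_PROMPTS = 100_000       # limit stated above
MAX_EST_TOKENS = 50_000_000  # limit stated above

def check_limits(path):
    """Count prompts and roughly estimate tokens in a JSONL file.
    Returns (prompt_count, estimated_tokens, within_limits)."""
    prompts = 0
    est_tokens = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            prompts += 1
            text = record.get("prompt", "") + record.get("completion", "")
            est_tokens += max(1, len(text) // 4)  # rough heuristic only
    within = prompts <= MAX_PROMPTS and est_tokens <= MAX_EST_TOKENS
    return prompts, est_tokens, within
```

If the check fails, the data has to be split across jobs (or trimmed) rather than submitted in one go.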