We’re working on adding a fine-tunable version of text-davinci-002.
We likely won’t have a fine-tunable version of each base model (e.g. text-davinci-001, text-davinci-002, etc.); instead, we’re focusing on delivering the best fine-tuning model for each capability level.
Is the ‘continue fine-tuning from a fine-tuned model’ feature a type of continual learning, or will the new data for the second iteration of fine-tuning be merged with the (cached) data used for the first iteration, with the merged data then used to fine-tune the pre-trained model?
Can a fine-tuned model reasonably be fine-tuned again indefinitely, or is doing so likely to cause problems? Basically, can fine-tuning be used for iterative learning?
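For context, here’s roughly what I mean by iterating, as a minimal sketch using the pre-1.0 openai Python library; the file name and fine-tuned model name are just placeholders, and I’m assuming the “continue from a fine-tuned model” feature works by passing the previous fine-tuned model’s name as the `model` parameter:

```python
import openai  # pre-1.0 openai-python, assumed here

# Upload the second round of training examples (JSONL of prompt/completion pairs).
new_file = openai.File.create(
    file=open("round2_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a new fine-tune, passing the *previous* fine-tuned model's name
# instead of a base model, so training (presumably) continues from those weights.
job = openai.FineTune.create(
    training_file=new_file.id,
    model="davinci:ft-my-org-2022-09-01-00-00-00",  # placeholder model name
)
print(job.id)
```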
Any update on when this might be available? My company is in the process of developing a fine-tuned model, but we’re realizing that fine-tuning davinci isn’t financially justifiable, since text-davinci-002 delivers equally good (and sometimes better) results than the fine-tuned model, which is more expensive to use. Any advice or insight would be greatly appreciated, thanks!
I have tried it and realized that this is useless! I thought it would give me the ability to extend the existing data of the previous fine-tuned model, but it just starts another new model from scratch. That means the previous fine-tuned model’s dataset is not included.
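If you want the new model to reflect both datasets, one workaround is to merge the old and new JSONL files yourself and fine-tune on the combined file; a rough sketch (the file names are just examples):

```python
import json

# Combine the first-round and second-round training files so the new
# fine-tune job sees all of the examples, not just the new ones.
merged = []
for path in ("round1_data.jsonl", "round2_data.jsonl"):
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                merged.append(json.loads(line))

with open("merged_data.jsonl", "w", encoding="utf-8") as out:
    for example in merged:
        out.write(json.dumps(example) + "\n")
```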