Fine-tune multiple models? [resolved]

We want to try out the new fine-tuning feature so that we can ultimately build a fine-tuned model. Our plan is to first fine-tune a ‘rough draft’ model to see how it responds and where we can improve, and then fine-tune a ‘final draft’ model for running some GPT-3 experiments.

Does anyone know if we as users are allowed to fine-tune two models, or are we only allowed to fine-tune one?

If we can only fine-tune one model, we would probably fine-tune until we feel comfortable switching to a final draft, then delete the entire corpus (the fine-tuning data) and upload our updated corpus. But since fine-tuned models learn from their data, would the model – even with the new corpus – still retain patterns learned from the old corpus?
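For reference, the GPT-3 fine-tuning corpus being discussed here is a JSONL file with one prompt/completion pair per line. A minimal sketch of building such a file (the file name, separator tokens, and example pairs are made up for illustration):

```python
import json

# Hypothetical example pairs; the legacy GPT-3 fine-tuning format
# expects one {"prompt": ..., "completion": ...} JSON object per line.
pairs = [
    {"prompt": "Summarize: The cat sat on the mat.\n\n###\n\n",
     "completion": " A cat rested on a mat. END"},
    {"prompt": "Summarize: It rained all day.\n\n###\n\n",
     "completion": " It was rainy. END"},
]

def write_corpus(path, pairs):
    """Write prompt/completion pairs as one JSON object per line (JSONL)."""
    with open(path, "w") as f:
        for pair in pairs:
            f.write(json.dumps(pair) + "\n")

write_corpus("rough_draft_corpus.jsonl", pairs)
```

Swapping corpora then just means uploading a different JSONL file for the next fine-tune run; the file itself carries no history from earlier runs.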


Is each fine-tune request tied to a single model, or can we send separate requests for different models we create? For example, I might have ‘rough_draft_1’ and send in three fine-tune requests for it, and then have ‘rough_draft_2’ and send in five requests for that?


Sorry, I think I’m still not following, because I don’t know what ‘request’ means here. What exactly is a ‘request’?

Thanks very much m-a.schenk for all of your help, this clarifies a lot! So we essentially get to try fine-tuning 10 models per month, and model #2 and model #4 can be totally different if we want. Am I understanding the example and what you’re saying correctly?


That’s great. Thanks again for all the help!


I have made an example of fine-tuning an OpenAI model on readwise.io data, here: readwise_openai.ipynb · GitHub.
I like to version my fine-tuning datasets using SemVer (readwise-0.0.1.jsonl, …), since we can’t give a description or proper name to our fine-tuned models.
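Since the model names themselves can’t carry a description, a small helper can keep the dataset file names consistent. This is just a sketch of my naming convention, not anything from the OpenAI API; the `readwise-…` name is an example:

```python
import re

def bump_patch(filename):
    """Bump the patch number in a SemVer-versioned dataset name,
    e.g. 'readwise-0.0.1.jsonl' -> 'readwise-0.0.2.jsonl'."""
    m = re.fullmatch(r"(.*-)(\d+)\.(\d+)\.(\d+)(\.jsonl)", filename)
    if m is None:
        raise ValueError(f"not a SemVer-named dataset: {filename}")
    prefix, major, minor, patch, ext = m.groups()
    return f"{prefix}{major}.{minor}.{int(patch) + 1}{ext}"
```

I bump the patch number for small corrections to the corpus and the minor/major numbers by hand when the corpus changes meaningfully, so I can always tell which dataset produced which fine-tuned model.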


Hi… When you refer to a month, does that mean the 1st–30th of a calendar month, or does the month start when the user first begins fine-tuning models?
