Davinci Fine-Tuning?

I’ve been working on a songwriting assistant for a while, with some really cool results. However, I feel it could really be augmented by fine-tuning Davinci. My few-shot results have been great; however, it sometimes goes a bit left-field when I create more specific prompts and constraints. I did a Curie fine-tune that is producing okay, but repetitive, results.

So, I’ve applied for access to the Davinci fine-tune. Is there any way to check on the status of my application?


I found that a well fine-tuned Curie can outperform Davinci with few-shot examples. You might want to take a second look at your data and hyperparameters. How many samples are you using for the fine-tune? What temperature, top_p, and penalties are you using?


So, I’m using over 200 examples, and the temperature is 0.7 & top_p is 1. Penalties don’t seem to change results much.

You may want to change the fine-tuning structure to conditional generation, which is less likely to produce repetitive output.

Also, 200 examples is a small dataset; if you have any way of increasing it, that would greatly improve performance.
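Concretely, "conditional generation" here amounts to giving each training record a meaningful prompt rather than an empty one, so the model learns to satisfy a stated constraint instead of free-associating over the corpus. A hypothetical pair of records (the lyric text and prompt wording are made up for illustration, not from this thread):

```python
import json

# Unconditional record: empty prompt, the model only absorbs corpus style.
unconditional = {"prompt": "", "completion": " And I drift alone across the sea"}

# Conditional record: the prompt states the constraint the completion
# must satisfy, which discourages repetitive, unanchored output.
conditional = {
    "prompt": "Write a lyric line ending in the word 'sea':\n\n",
    "completion": " And I drift alone across the sea",
}

# Each record becomes one line of the JSONL training file.
print(json.dumps(conditional))
```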


Thanks, that’s very interesting!

You could use more data; eventually it’ll learn this.

You could also create a simple discriminator based on a deterministic check of whether a particular completion ends with the desired word. Then you generate multiple completions and pick the one that ends with the appropriate word.
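A minimal sketch of that pick-by-rule idea (the function name and punctuation handling are illustrative assumptions; the candidates would come from multiple API completions):

```python
def pick_completion(candidates, target_word):
    """Return the first candidate completion whose final word matches
    target_word (ignoring case and trailing punctuation), else None."""
    for text in candidates:
        words = text.strip().rstrip(".!?,;:").split()
        if words and words[-1].lower() == target_word.lower():
            return text
    return None
```

If no candidate passes the check, the caller can request another batch of completions or fall back to the best-scoring one.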



I actually have that type of logic in the method that calls the API in my Davinci implementation, as a failsafe for the few times Davinci doesn’t get it. However, Curie almost never gives me the word at the end of the sentence, so this would time out my request counter pretty much every time and cause big latency issues.

Thanks for the tip, I’ll increase my data! Let’s try 10,000 lines!


I built a parser that will load text into the JSONL format. I’m a writer with lots of raw text lying around, so I just load my text into the parser and it slices it up into usable JSONL for the fine-tune job.
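The poster doesn’t share their parser, but a minimal sketch of the idea might look like this, assuming (hypothetically) that each chunk of text is split at its midpoint into a prompt and completion:

```python
import json

def text_to_jsonl(raw_text, chunk_words=100):
    """Slice raw text into fixed-size word chunks, split each chunk at its
    midpoint, and emit one JSONL fine-tuning record per chunk."""
    words = raw_text.split()
    records = []
    for i in range(0, len(words), chunk_words):
        chunk = words[i:i + chunk_words]
        mid = len(chunk) // 2
        records.append(json.dumps({
            "prompt": " ".join(chunk[:mid]),
            "completion": " " + " ".join(chunk[mid:]),
        }))
    # One JSON object per line, ready to upload to the fine-tunes endpoint.
    return "\n".join(records)
```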


That’s a great idea. When I get desired results, I like to feed them back into the prompt. It definitely wouldn’t hurt to have a few thousand good completions ready to go.


Awesome! Did you open-source that parser? I’m also a writer.


We did that some months ago and had groups coming together to sing the songs…

The link is to ‘AI’s Got Talent’, which we ran back in February 🙂

Thanks, Boris. What do you recommend as a minimum amount of data for fine-tuning? Assuming there’s no fixed threshold, could you offer any rules of thumb, or at least order-of-magnitude guidance?

Very cool results. The Java song was great!

Hi @chimpsarehungry,

No, the parser isn’t open source; it’s specifically tailored to my use case. However, it’s pretty straightforward to build one: all you need to do is slice up your text, put it into the JSONL format, and write it to a new file for uploading to the fine-tunes endpoint.


Thanks. I’m confused about the need for the prompt + completion format, though. I was used to fine-tuning GPT-2 by just providing lines of text so that it becomes more similar to the writing in the training set. If I don’t have a prompt + completion structure for this application, what is possible? Maybe just break every sentence in half?



Hi @boris, I hope your week is going well. Pinging again on this message, in case you don’t mind sharing your thoughts on the minimum amount of data required to fine-tune.

This guide will hopefully answer your question in more detail: OpenAI API. Normally, a few hundred examples is a good start, and then you’ll see roughly a linear increase in performance for every doubling of the dataset.


Hi! What I did, which seemed to work very well, was to parse sentences two by two and put the first sentence in the prompt and the following sentence in the completion. However, I’ll bet that splitting each sentence in half would work too!
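That two-by-two pairing can be sketched as follows (the naive sentence-splitting regex is an assumption; a real pipeline would handle abbreviations and quotes):

```python
import re

def sentences_to_pairs(raw_text):
    """Split text into sentences and pair each sentence (prompt)
    with the one that follows it (completion)."""
    # Split after sentence-ending punctuation followed by whitespace.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", raw_text) if s.strip()]
    return [
        {"prompt": a, "completion": " " + b}
        for a, b in zip(sentences, sentences[1:])
    ]
```

Each dict would then be serialized as one JSONL line for the fine-tune upload.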

OK, great! Maybe a combo: 1/4 sentence + 3/4, 1/2 + 1/2, 3/4 + 1/4, two by two like you did, and other combos.
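Those ratio splits could be generated with a small helper like this hypothetical sketch (cutting on word boundaries, since character-level cuts would break words):

```python
def split_at_ratios(sentence, ratios=(0.25, 0.5, 0.75)):
    """Split one sentence into (prompt, completion) pairs at several
    word-boundary ratios, to diversify the training data."""
    words = sentence.split()
    pairs = []
    for r in ratios:
        # Clamp the cut so both halves are non-empty.
        cut = max(1, min(len(words) - 1, round(len(words) * r)))
        pairs.append((" ".join(words[:cut]), " " + " ".join(words[cut:])))
    return pairs
```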


I like to try things and log the results!


Are you using fine-tuning with Curie? I know there are a lot of lyrics websites out there; you could probably scrape those to get a large enough dataset. Even GPT-2 becomes pretty powerful when fine-tuned, so that might be sufficient without Davinci access.