Fine-tuning is beautiful

I’m surprised I didn’t do it earlier. It was a bit hard to set up on macOS Monterey because of poor Python support; it took a few weeks to figure out. This video really helped me: GPT-3 Fine-tuning and interactions - YouTube

I initially thought the instructions were meant to be pasted straight into the Terminal (like typical npm stuff), but I actually needed to set up a Python virtual environment.
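For anyone hitting the same wall, the setup boils down to something like this (a minimal sketch; the environment name openai-env is my own choice):

```shell
# Create an isolated Python environment instead of running commands
# against the system Python
python3 -m venv openai-env
# Activate it (macOS/Linux; on Windows it's openai-env\Scripts\activate)
. openai-env/bin/activate
# Install the OpenAI library and CLI inside the venv (needs network)
python -m pip install --upgrade openai
```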

Anyway, the quality of the output has improved massively; fine-tuned curie completions are higher quality than few-shot davinci completions.

The pricing page was also misleading. I assumed that if I trained on 50,000 tokens, it would charge me something like $1.50 for davinci every time I ran a prompt. But no, that’s a one-time cost when it trains. I’ve also managed to shorten the prompts themselves from 450 tokens to 150 tokens, so the price discount kicks in fast. And it’s good enough on curie, which saves further cost (plus the faster response times!).
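To make the arithmetic concrete, here’s a sketch in Python; the per-1K-token rates are placeholders based on my reading of the pricing page at the time, not authoritative numbers:

```python
# Hypothetical rates (USD per 1,000 tokens) -- check the current pricing page.
DAVINCI_TRAIN_RATE = 0.03   # one-time fine-tune training rate (assumed)
DAVINCI_USAGE_RATE = 0.12   # per-request usage rate for a tuned model (assumed)

# Training on 50,000 tokens is a one-time cost, not a per-prompt cost:
training_cost = 50_000 / 1000 * DAVINCI_TRAIN_RATE
print(f"one-time training: ${training_cost:.2f}")

# Shortening each prompt from 450 to 150 tokens cuts per-run usage cost:
long_prompt_cost = 450 / 1000 * DAVINCI_USAGE_RATE
short_prompt_cost = 150 / 1000 * DAVINCI_USAGE_RATE
print(f"per run: ${long_prompt_cost:.4f} -> ${short_prompt_cost:.4f}")
```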

And the amazing thing is actually being able to train it on a thousand examples at very little cost.

I’m going to start fine-tuning all the prompts I use often. I spend most of my time on content generation, so this is quite exciting; it opens up many new avenues. You could even build something like a chatbot, which would previously have been too costly.



Could you share some JSON examples of how you fine-tune your prompts?

I have tried it, but I got worse results…


Alas, I can’t find the video.
How did you structure your Excel file?
One post per cell, or did you split each post into sections?

Here’s an actual example for the JSONL files, something I’m using for another product:

{"prompt": "energetic -->", "completion": " ${p.Sdesc} was always an energetic person. ${p.She} never really knew ${p.her} limits when it came to staying up late or doing something for the first time, and ${p.she} didn't care much about the consequences either.###"}
{"prompt": "lethargic -->", "completion": " ${p.She}'s not sure what ${p.his} problem is, but lately ${p.she}'s just been feeling more and more exhausted. It's a struggle to get up in the mornings and even if ${p.she} does make it out of bed, ${p.she} feels like a zombie.###"}
{"prompt": "talkative -->", "completion": " ${p.Sdesc} chatters incessantly. ${p.She} talks about ${p.her} family, the gossip among the upper class, and the latest trends among the lower. ${p.Her} conversation topics range from what ${p.she} ate for breakfast to ${p.her} latest vacation.###"}
{"prompt": "leader -->", "completion": " ${p.She} had always been the one in ${p.her} family that was able to take charge. ${p.She} made decisions, because ${p.she} knew what needed to be done. ${p.She} never asked for help with anything, and never listened to anyone's opinion but ${p.her} own. ${p.She} didn't see ${p.herself} as a leader, but ${p.she} knew when to step up and take control.###"}
{"prompt": "optimist -->", "completion": " ${p.She} is constantly building bridges between two sides. ${p.She} never takes sides and always tries to find the good in everyone. ${p.His} optimism and positivity has made ${p.him} popular.###"}
{"prompt": "emotional -->", "completion": " ${p.Sdesc} was always passionate about everything ${p.she} did. ${p.His} emotions were always close to the surface. ${p.She} was never afraid to express ${p.his} feelings, even if it meant getting into arguments with others. ${p.She} was always fiery and intense, both in ${p.her} words and ${p.her} actions.###"}
{"prompt": "hot-blooded -->", "completion": " ${p.She} is always the life of the party - always ready for a good time, and always up for a fight. ${p.She} is passionate about everything ${p.she} does, and ${p.she} doesn't hold back when it came to expressing ${p.his} feelings.###"}
{"prompt": "warm -->", "completion": " ${p.She} never seems to stop being happy. ${p.She} is always smiling, never too tired for a laugh.###"}
{"prompt": "chatty -->", "completion": " ${p.Sdesc} chatters incessantly. ${p.She} talks about ${p.her} family, the gossip among the upper class, and the latest trends among the lower. ${p.Her} conversation topics range from what ${p.she} ate for breakfast to ${p.her} latest vacation.###"}

Note that the prompt has an ending -->. The completion starts with a space. The completion ends with a ###, which I’m using as a stop sequence.
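If you generate the lines programmatically, json.dumps keeps the quoting correct. A small sketch of the conventions above (the trait/text pair here is made up):

```python
import json

SEP = " -->"   # appended to every prompt
STOP = "###"   # appended to every completion, used as the stop sequence

def to_jsonl_line(trait: str, text: str) -> str:
    """Build one fine-tuning example with the separator, the leading
    space on the completion, and the trailing stop sequence."""
    return json.dumps({
        "prompt": trait + SEP,
        "completion": " " + text + STOP,
    })

line = to_jsonl_line("fearless", "${p.She} never backs down from a fight.")
print(line)
```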

You can use the tool openai tools fine_tunes.prepare_data -f <LOCAL_FILE> to check the correctness of your inputs.

I personally prefer to write the JSONL file directly because I have more control, and I find the plain text easier to read than Excel cells.

After that, pick the tuned model and run it with a prompt, e.g.

fearless -->

curie output: ${p.She} never backs down from a fight. ${p.She} is always ready to stand up for what’s right, even if it means standing up to people who are better than her.

davinci output: ${p.She} never backed down from a fight, no matter how bad it seemed. ${p.She} never let ${p.his} fear stop ${p.him} from doing what ${p.she} thought was right. ${p.She} never worried about what was going to happen, only about what was happening right now.

There is a difference in quality, but it seems quite minor in this situation. Also, apparently quality goes up every time you double the number of samples… you can even use the model to generate new samples and then fix the output to your liking.
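Since the completions come back with the ${p.X} placeholders still in them, I post-process them on my side. A hedged sketch (the pronoun fields are from my own template scheme, and the stop-sequence strip assumes you didn’t already pass ### as a stop parameter):

```python
import re

def render(completion: str, p: dict) -> str:
    """Strip the ### stop sequence and substitute ${p.field} placeholders."""
    text = completion.split("###", 1)[0]
    return re.sub(r"\$\{p\.(\w+)\}", lambda m: p[m.group(1)], text)

pronouns = {"She": "She", "she": "she", "her": "her"}
out = render("${p.She} never backs down from a fight.###", pronouns)
print(out)  # She never backs down from a fight.
```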


@smuzani thank you for your example.
Another question: did you compile it manually, adding strings such as “prompt” and the stops yourself?
Or is there a more automatic way to do it?

I believe the automatic way is to use openai tools fine_tunes.prepare_data
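If your source data lives in a spreadsheet, one semi-automatic route is to export it as CSV and convert it before running prepare_data. A sketch assuming a two-column file with prompt,completion headers (the filenames are made up):

```python
import csv
import json

def csv_to_jsonl(csv_path: str, jsonl_path: str) -> int:
    """Convert a prompt/completion CSV into JSONL, adding the separator,
    the leading space, and the ### stop sequence automatically."""
    count = 0
    with open(csv_path, newline="") as src, open(jsonl_path, "w") as dst:
        for row in csv.DictReader(src):
            dst.write(json.dumps({
                "prompt": row["prompt"] + " -->",
                "completion": " " + row["completion"] + "###",
            }) + "\n")
            count += 1
    return count

# csv_to_jsonl("traits.csv", "traits.jsonl")
# then: openai tools fine_tunes.prepare_data -f traits.jsonl
```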


Thank you so much for providing examples!