Public repo of my finetuning data

Since fine-tuning is a hot topic, I figured I’d share my data. Most of it comes from experiments in support of my research or from side projects I’ve worked on. I’m sharing it to help the community better understand how to perform fine-tuning. The JSONL files are included, so you can just grab them and go. This repo is under the MIT license, so do whatever you want with it.
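For anyone new to the format: each line of a JSONL file is one standalone JSON object, typically a prompt/completion pair. Here is a minimal sketch using the common convention; the keys, separator, and stop tokens shown are generic examples, not necessarily what this repo uses:

```python
import json

# Hypothetical examples in the prompt/completion JSONL shape commonly
# used for fine-tuning; the repo's actual keys and tokens may differ.
examples = [
    {"prompt": "Q: When does the library open on Saturdays?\n\n###\n\n",
     "completion": " The library opens at 10am on Saturdays. END"},
    {"prompt": "Q: Can I renew a book online?\n\n###\n\n",
     "completion": " Yes, you can renew through your account page. END"},
]

# One JSON object per line, no enclosing array: that's all JSONL is.
jsonl_text = "\n".join(json.dumps(ex) for ex in examples)

# Reading it back is just as simple.
loaded = [json.loads(line) for line in jsonl_text.splitlines()]
```

The fixed separator at the end of each prompt and the stop token at the end of each completion just mark where the prompt ends and the answer should stop.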


Thanks for this! I have just started down this path for our own chatbot (in a library) and this is a great shove in the right direction!


Is this a public library setting? If so, I know someone you probably will want to talk to.

University Library, but if successful, I’m sure it’d be useful for Public Libraries as well!

@adallara is working on a related thesis right now.

Thanks for sharing your data like this! That is very helpful. I see that you’re including the explicit instructions for each tuning output, but would you ever include your instructions as one of the fine-tuning prompts themselves? I’m trying to figure out how to incorporate the explicit instructions I have for a game into the fine-tuning data format.


I’m not quite sure what you mean. Can you show an example?

Yeah, for example, you’ve got syn_prompt2, which you use in every prompt to direct the response in your tuning data. I’d call these the “instructions”. I was wondering whether you’d ever include just those instructions as a prompt line on their own.

For example, when teaching the engine to play “Mastermind”, I’d first give it a paragraph of instructions on how the game is played before moving on to show actual examples of play in subsequent prompts. If I wanted a fine-tune for this, how would I incorporate the “instructions” paragraph? I tried once including it as a prompt with a blank response, but I didn’t find it worked very well. Is that maybe not a use case for tuning data? Is there a better place to put explicit instructions?

If I understand you correctly, you don’t need the entire set of instructions. The model will implicitly learn them, even for something like chess or Mastermind. Remember that GPT-3 has already read a lot and knows the rules, speech patterns, etc. Fine-tuning just ensures that it will consistently use the embeddings it’s already got. You’re not teaching it anything new; you’re making it practice a skill it already has.
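To make that concrete with the Mastermind case: rather than pairing a rules paragraph with a blank completion, each training line can be one self-contained turn, and the model infers the rules from seeing many of them. A minimal sketch; the prompt wording, layout, and scoring format below are my own invention, not anything from the repo:

```python
import json

# Hypothetical Mastermind turns: no rules paragraph anywhere, just
# concrete guess -> feedback examples for the model to imitate.
turns = [
    ("RGBY", "RGYB", "2 exact, 2 partial"),
    ("RGBY", "RGBY", "4 exact, 0 partial"),
]

examples = [
    {"prompt": f"Mastermind\nSecret: {secret}\nGuess: {guess}\nScore:",
     "completion": f" {score}"}
    for secret, guess, score in turns
]

jsonl_text = "\n".join(json.dumps(ex) for ex in examples)
```

With enough turns like these, the scoring convention is demonstrated rather than described, which matches the advice above.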

Ah ok, that makes sense. Thank you for your insight!
