Fine-tuning Tutorials

Hi all, I’d like to work on more end-to-end fine-tuning tutorials. What use cases would be highest priority for you?

7 Likes

Factual question answering with recent data; this is probably difficult.

7 Likes

I’d love something covering the use case of fine-tuning on a bunch of output examples without prompts. For a ton of use cases the results are amazing for relatively low effort (and way cheaper to run than the corresponding prompt engineering).

Even roughly following the current guide, I went into implementation a bit unclear about what the structure of the examples should be (whether a prompt key was even required, and that an empty prompt can give results as good as, if not better than, picking one anyway).

Including sample JSON / CSV (with a header row) / JSONL snippets would help a ton too.
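For reference, a minimal sketch of what a prompt-less training file could look like, assuming the prompt/completion JSONL format from the current fine-tuning guide; the file name and example texts below are made up:

```python
import json

# Hypothetical output-only examples; real data would come from your own corpus.
examples = [
    "Our Q3 revenue grew 12% year over year, driven by the new subscription tier.",
    "The committee voted 5-2 to adopt the revised zoning proposal.",
]

# One JSON object per line with "prompt" and "completion" keys;
# for prompt-less training the prompt is simply an empty string.
with open("train.jsonl", "w") as f:
    for text in examples:
        record = {"prompt": "", "completion": " " + text}  # leading space, per the data-prep guidance
        f.write(json.dumps(record) + "\n")
```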

6 Likes

I’d love one on fine-tuning codex models once that’s possible, since I think that would crack open expressibility in some code-application domains that didn’t happen to have instructive samples scooped up in initial training.

5 Likes

I have been thinking about how fine-tuning is perhaps one of the most critical steps towards AGI. It allows a neural network to continue learning as the larger AGI system accumulates more knowledge and experience.

How does this insight translate into a tutorial?

Pipelining and preparing data. I’ve had plenty of practice scraping data from various sources and cleaning or preparing it for different purposes, from GPT-2 to Solr and now fine-tuning. Data prep is a specialty in its own right. I think that if you were to produce some tutorials about handling data, cleaning it up, and creating robust datasets, that would make all the difference for the usability of the fine-tuning endpoints.
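As a rough sketch of the kind of pipeline such a tutorial could walk through (everything here is hypothetical: the cleaning rules, the example pair, and the file name):

```python
import json
import re

def clean(text: str) -> str:
    """Minimal cleanup: strip leftover HTML tags and collapse whitespace."""
    text = re.sub(r"<[^>]+>", " ", text)      # drop HTML remnants from scraping
    text = re.sub(r"\s+", " ", text).strip()  # normalize whitespace
    return text

def to_jsonl(pairs, path):
    """Write (prompt, completion) pairs as one JSON object per line."""
    with open(path, "w") as f:
        for prompt, completion in pairs:
            record = {"prompt": clean(prompt), "completion": " " + clean(completion)}
            f.write(json.dumps(record) + "\n")

# Hypothetical scraped pair; a real pipeline would pull these from your own sources.
pairs = [("Summarize: The city council met on Tuesday...", "The council approved the budget.")]
to_jsonl(pairs, "prepared.jsonl")
```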

8 Likes

Thanks @daveshapautomator - would you be interested in creating / contributing any such materials by any chance?

1 Like

I’ve created training materials before and I can probably clean up code that I’ve already got. I think it would be an honor and a privilege to continue. Let me know what you have in mind!

4 Likes

I’d like to see some approaches and tests of multi-style prompt fine-tuning. By this I mean a model fine-tuned to respond to three or four distinct style prompts, each intended for a different completion style.
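One way such training data could be laid out, assuming a made-up scheme where a short style tag at the start of each prompt selects the completion style (the tags and texts are invented for illustration):

```python
import json

# Hypothetical style tags mapped to completions in the corresponding style.
styles = {
    "[FORMAL]": "We regret to inform you that the shipment has been delayed.",
    "[CASUAL]": "Heads up, your package is running a bit late.",
    "[BULLET]": "- Shipment delayed\n- New ETA: next week",
}

with open("multi_style.jsonl", "w") as f:
    for tag, completion in styles.items():
        prompt = f"{tag} Tell the customer their shipment is delayed:\n\n"
        f.write(json.dumps({"prompt": prompt, "completion": " " + completion}) + "\n")
```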

4 Likes

Do you have any example use cases in mind?

Hi, a tutorial on non-classification text models would help (like question answering).

3 Likes

Paraphrasing, rewriting sentences, possibly in different languages, please :slight_smile:

4 Likes

Thank you all! Will look into all of these.

3 Likes

Hi, I know I’m a bit late, but a conditional generation tutorial would be very helpful for us. I have been trying to develop a fine-tuned model for legal case data extraction, e.g. given some context (the legal case), extract the sequence of events.
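A sketch of what one training pair for that could look like, assuming the common pattern of a fixed instruction plus a separator at the end of the prompt and a stop marker at the end of the completion; the case text, separator, and file name are all invented:

```python
import json

# Hypothetical conditional-generation pair: the prompt carries the source context
# (the case text) plus an instruction, the completion is the extracted event list.
case_text = (
    "On 3 March the plaintiff signed the lease. On 10 April the landlord "
    "served notice. On 2 May the plaintiff filed suit."
)
record = {
    "prompt": case_text + "\n\nExtract the sequence of events:\n\n###\n\n",
    "completion": " 1. 3 March: plaintiff signs lease\n"
                  "2. 10 April: landlord serves notice\n"
                  "3. 2 May: plaintiff files suit\n\nEND",
}
with open("legal_extraction.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```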

2 Likes

A list of example uses would be:

- Abstractive summarizer
- Narrative completion (either a news article or a story, maybe a paragraph at a time; probably mention copyright and that overfitting here could cause it to just spit out the input text)
- Answering questions from Wikipedia (or from a specific document)

Another that might be fun but more educational would be something like a Mad Libs / Fill in the blank example (fine-tuning it to work as a fill-mask model), where it replaces any [MASK] tokens in the text.
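For illustration, one way such fill-in-the-blank training pairs could be generated (a rough sketch; the masking scheme, prompt wording, and file name are all made up):

```python
import json
import random

def make_mask_example(sentence: str) -> dict:
    """Build one hypothetical fill-in-the-blank pair by masking a random word."""
    words = sentence.split()
    i = random.randrange(len(words))
    answer = words[i]
    words[i] = "[MASK]"
    masked = " ".join(words)
    # Prompt shows the masked text; completion supplies the hidden word.
    return {"prompt": masked + "\n\nFill in [MASK]:\n\n", "completion": " " + answer}

example = make_mask_example("The quick brown fox jumps over the lazy dog")
with open("fill_mask.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")
```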

I think it would be helpful to include a line from the training JSONL file, the approximate size of the dataset, the settings for each hyperparameter, and why those settings make sense.

For example, if the learning rate multiplier is set to 0.2 and the epoch count to 4, mention something like:
“Because our dataset is 400 MB, we’re training for 4 epochs. Generally you want these two to be proportional.”

Insight into the prompt_loss_weight parameter would be particularly useful, especially for the summarization example.
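Something along these lines could anchor that discussion, assuming the current fine-tunes endpoint via the openai Python library; the API key, file ID, and every parameter value below are placeholders for illustration, not recommendations:

```python
import openai  # the library's FineTune resource for the fine-tunes endpoint

openai.api_key = "sk-..."  # placeholder

# Illustrative hyperparameter settings; the tutorial would explain each choice.
resp = openai.FineTune.create(
    training_file="file-abc123",      # hypothetical ID of an already-uploaded JSONL file
    model="curie",
    n_epochs=4,                       # fewer passes for large datasets, more for small ones
    learning_rate_multiplier=0.2,     # scales the pretraining learning rate
    prompt_loss_weight=0.01,          # down-weights loss on prompt tokens, which matters
                                      # when completions (e.g. summaries) are much shorter
                                      # than their prompts
)
print(resp["id"])
```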

Edit: Maybe something about multi-task learning / fine-tuning it to do 2 similar tasks.

1 Like

A Web3 bot would be super helpful. A lot of people are super excited about Web3 but don’t fully understand blockchain, DeFi, DAOs, etc., myself included. There’s so much buzz around the topic that it’s hard to find reliable educational resources.

1 Like