Hi everyone, I’m very new to GPT-3, but after watching @daveshapautomator’s YouTube channel on fine-tuning, I feel like I want to try my hand at fine-tuning my own model.
I have an idea I want to try to create: the funny uncle everyone has, but one who helps rewrite my own writing and emails. I’m aiming for a very specific style and tone of voice, which is a bit hit and miss with prompt engineering.
I have collected a bunch of text that falls within the tone of voice and writing style I’m looking for. The samples are of varying lengths and come from multiple sources. I’m just a bit unsure how to approach constructing the prompts in my training set.
I assume the completions should be the texts I’ve collected, but how would I go about coming up with the prompts?
Should I literally just use some prompt engineering to reverse-engineer my sample texts into “non-funny” versions and use those as the inputs?
Sorry if this seems very obvious to someone; I’ve been trying to figure out how to go about this, but so far I’ve run into dead ends. I’m just trying to learn.
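To make the reverse-engineering idea concrete, here’s a minimal sketch of how the training file could be built. The `make_plain` function is hypothetical — in practice it would be a GPT-3 completion call with an instruction like “Rewrite the following in a plain, neutral tone” — and the `###` separator and `END` stop token are just conventions from OpenAI’s legacy fine-tuning guidance, not requirements.

```python
import json

# Hypothetical stand-in for a GPT-3 call that strips the humour out of a
# sample (e.g. an instruction like "Rewrite this in a plain, neutral tone").
# This placeholder just lowercases the text so the sketch runs on its own.
def make_plain(funny_text: str) -> str:
    return funny_text.lower().capitalize()  # placeholder only

SEPARATOR = "\n\n###\n\n"  # fixed marker so the model knows the prompt ended
STOP = "\n\nEND"           # stop sequence appended to every completion

def build_records(samples):
    """Turn collected 'funny' samples into prompt/completion records."""
    records = []
    for funny in samples:
        plain = make_plain(funny)  # the reverse-engineered "non-funny" input
        records.append({
            "prompt": plain + SEPARATOR,
            # Completions conventionally start with a space and end with
            # the stop sequence, per OpenAI's fine-tuning data guidelines.
            "completion": " " + funny + STOP,
        })
    return records

def write_jsonl(records, path):
    """Write one JSON record per line, the format fine-tuning expects."""
    with open(path, "w") as f:
        for r in records:
            f.write(json.dumps(r) + "\n")
```

At inference time you would then send a plain draft email followed by the same separator, and the fine-tuned model should complete it in the collected style.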
For embeddings, I haven’t posted much about it, but I’m thinking of the standard Q&A approach found here, asking GPT-3 to summarize in a way that preserves the tone, where the embedded data is a bunch of things that person would say/think/believe.
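A rough sketch of that retrieval step, in case it helps. The vectors here are made-up toy values; in a real setup each quote and the query would be embedded with the OpenAI embeddings endpoint, and the top matches pasted into the prompt as tone context.

```python
import math

# Toy "embeddings" of things the person would say/think/believe.
# Real vectors would come from an embeddings API call, not be hand-written.
quotes = {
    "Corduroy is a lifestyle, not a fabric.": [0.9, 0.1, 0.2],
    "Always sign off an email with a pun.":   [0.1, 0.8, 0.3],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_quotes(query_vec, k=1):
    """Return the k stored quotes most similar to the query embedding."""
    ranked = sorted(quotes, key=lambda q: cosine(quotes[q], query_vec),
                    reverse=True)
    return ranked[:k]

# The retrieved quotes would then go into the prompt, and GPT-3 would be
# asked to answer or summarize while preserving that voice.
```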
This might be completely obvious to everyone who knows more about GPT than me, but say I found a bunch of product descriptions from the Palace skateboards webshop and trained a model on them.
For example:

Input:
Product description of brown shopping bag in corduroy

Output:
CORDUROY SHOPPER BROWN
I FEEL LIKE PEOPLE CALLED ROY
DON’T REALLY MESS ABOUT WITH CORDUROY
Then I assume the model would be good at writing similar product descriptions, but would it also talk in a similar way about anything else?
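For reference, the example pair above would be one line in the fine-tuning JSONL file. The `###` separator and `END` stop token are assumed conventions (from OpenAI’s fine-tuning guidance), not part of the Palace text itself.

```python
import json

# The Palace example as a single fine-tuning record: one JSON object per
# line in the .jsonl training file.
record = {
    "prompt": "Product description of brown shopping bag in corduroy\n\n###\n\n",
    "completion": (
        " CORDUROY SHOPPER BROWN\n"
        "I FEEL LIKE PEOPLE CALLED ROY\n"
        "DON'T REALLY MESS ABOUT WITH CORDUROY"
        "\n\nEND"
    ),
}

line = json.dumps(record)  # one such line per training example
```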