Hi everyone,
I’ve been thinking about how you’d go about fine-tuning GPT-3 to write newsletters in different tones and writing styles. My plan is to build a dataset of about 1,000 newsletters, spanning roughly five different tones, to act as the completions. I’d then write the corresponding prompts myself in a structured way, with each prompt referencing the target tone and writing style.
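For what it’s worth, here’s roughly how I was picturing a single training record, using the legacy prompt/completion JSONL format the fine-tuning endpoint expects. The tone label, style notes, and newsletter text are just illustrative placeholders, not real data:

```python
# Sketch of one training record for a /v1/fine-tunes style dataset.
# All field values below are hypothetical placeholders.
import json

example = {
    "prompt": (
        "Tone: playful\n"
        "Style: short sentences, second person, light humour\n"
        "Topic: weekly product update\n\n"
        "Newsletter:"
    ),
    # Completion starts with a space and ends with a stop sequence,
    # per the legacy fine-tuning data guidelines.
    "completion": " Hey there! Big week at HQ... (full ~200-word newsletter body)\n\n###",
}

# Append one JSON object per line to the training file.
with open("newsletters.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")
```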
I have a couple of reservations before I attempt this.
Firstly, I can see how fine-tuned models are very effective for classification or categorisation tasks. However, with completions as long as 200 words, it feels like there will be a lot of noise when trying to fine-tune a model to capture the tone and writing style of different types of newsletters. The prompts I’d need to write would also be relatively long, which could introduce further noise.
Secondly, with a dataset of 1,000 newsletters at 200 words each, I imagine the training token cost would be quite high. That feels like a relatively risky spend for something I’m playing around with in my free time.
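To put a rough number on that concern, here’s my back-of-the-envelope estimate. The prompt length, epoch count, and per-token price are assumptions on my part (I’m using the davinci training rate I’ve seen quoted, so please check current pricing):

```python
# Back-of-the-envelope fine-tuning cost estimate.
# Assumptions: ~1.33 tokens per English word, ~100 prompt tokens per
# example, the default 4 training epochs, ~$0.03 per 1K davinci
# training tokens.
n_examples = 1000
completion_tokens = 200 * 1.33   # ~266 tokens per 200-word newsletter
prompt_tokens = 100              # assumed structured prompt length
epochs = 4
price_per_1k = 0.03              # assumed davinci training rate (USD)

total_tokens = n_examples * (completion_tokens + prompt_tokens) * epochs
print(f"~{total_tokens:,.0f} training tokens -> ~${total_tokens / 1000 * price_per_1k:.0f}")
# ~1,464,000 training tokens -> ~$44
```

So under those assumptions it lands in the tens of dollars rather than hundreds, but I’d appreciate a sanity check on the numbers.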
Does anyone with experience fine-tuning GPT models for longer-form content have any advice? Any feedback on whether the concerns above are valid would be greatly appreciated.
Apologies if this has been covered before; I did look around for a similar thread.
Best,
Declan