Email generation use case - prompting or fine-tuning?

Hi, I am new and still learning how to use GPT and would like some guidance on how to best approach my use case.

My use case, generally, is to write different marketing emails depending on the occasion and product type. Imagine I am Amazon: depending on the occasion (e.g., Christmas, Valentine's Day, Black Friday), the type of product (e.g., electronics, fashion, grocery), and the consumer segment - let's call all of these 'email attributes' - I want to generate emails in different tones and lengths, for different products. I already have hundreds of thousands of emails with their labelled 'attributes'. I suppose these can be used as data to train or prompt the model.

I am wondering how I should approach this task: with few-shot prompting, or with fine-tuning? I read on the OpenAI Platform docs that only the base models can be fine-tuned, and that 'These are the original models that do not have any instruction following training'. So my first question is: does this mean that if I fine-tune my own model, I cannot prompt it the way I do with ChatGPT (i.e., giving a series of instructions to refine the output)?

My second question is: if I use fine-tuning, what is the best way to prepare the prompt/completion pairs? Ideally, I want to list the email attributes and a few words about a product, and have these map to the corresponding email in my dataset. But I don't understand how to formalise that, since it's not a classification task.

My third question is: how would I achieve the same goal with in-context few-shot prompting? I imagine that, given the desired email attributes, I would provide a few example emails and ask ChatGPT to produce an email in a similar style to the examples, according to the attributes. Does that make sense?
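To make that concrete, here is roughly the kind of few-shot prompt I imagine assembling. The attribute names, example email, and helper function here are all placeholders I made up, not anything from the API:

```python
# Sketch of assembling a few-shot prompt from labelled example emails.
# Attribute names and example texts are made-up placeholders.

def build_few_shot_prompt(attributes, examples, product_blurb):
    """Show a few labelled emails, then ask for a new one matching the attributes."""
    parts = [
        "Write a marketing email matching the given attributes.",
        "Here are some examples:\n",
    ]
    for ex in examples:
        attr_line = ", ".join(f"{k}: {v}" for k, v in ex["attributes"].items())
        parts.append(f"Attributes: {attr_line}\nEmail:\n{ex['email']}\n")
    attr_line = ", ".join(f"{k}: {v}" for k, v in attributes.items())
    parts.append(
        f"Now write a new email.\nAttributes: {attr_line}\n"
        f"Product: {product_blurb}\nEmail:"
    )
    return "\n".join(parts)

examples = [
    {
        "attributes": {"occasion": "Black Friday", "segment": "electronics"},
        "email": "Subject: 50% off headphones!\nBody: Deals start at midnight...",
    },
]
prompt = build_few_shot_prompt(
    {"occasion": "Christmas", "segment": "fashion"}, examples, "wool scarves"
)
```

The resulting `prompt` string would then be sent as a single user message.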

My last question: do you have any general tips beyond what's covered above?

Thank you for reading this and thank you in advance for any suggestions!!!


Based on your use case, as I understand it, here’s how I would implement it:

  1. User submits or selects the type of marketing email they want to send.
  2. The app searches your email database for that type via some API.
  3. The user is presented with the matching email, or a set of emails to choose from. These are still the raw emails from your database.
  4. When the user selects an email, they can then instruct ChatGPT to modify it in any way they want (e.g., change the tone).

It seems prompting is all you need.
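As a rough sketch of steps 2-4: `fetch_emails_by_type` is a placeholder for whatever lookup your database exposes, and the actual API call is left commented out so the outline stands on its own:

```python
# Sketch of the flow above. fetch_emails_by_type() stands in for whatever
# API queries your email database; the chat completion call is commented
# out, since it requires credentials and a live endpoint.

def fetch_emails_by_type(email_type):
    # Placeholder: replace with a real lookup against your database (step 2).
    return ["Subject: Black Friday blowout!\nBody: Doorbusters all weekend..."]

def build_rewrite_messages(raw_email, instruction):
    """Step 4: ask the model to modify the selected raw email."""
    return [
        {"role": "system", "content": "You rewrite marketing emails as instructed."},
        {"role": "user", "content": f"{instruction}\n\nEmail:\n{raw_email}"},
    ]

candidates = fetch_emails_by_type("black_friday")          # step 2
selected = candidates[0]                                   # step 3: user picks one
messages = build_rewrite_messages(selected, "Make the tone more playful.")
# import openai
# response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
```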


Many thanks! I think I will start with this approach.

Can I ask if my understanding regarding my first question is correct? I.e., fine-tuning can only make the model do one thing, in the way modelled by your data (the prompt/completion pairs), and you cannot instruct it afterwards the way ChatGPT can be instructed?

Thank you again!

Correct.

For fine-tuning, your prompt/completion pairs might look like this:

Prompt:

products: Christmas lights, gift wrap, tape, scissors
tone: excited, jolly, humorous
length: short
-> (your separator)

Completion:

Subject: Get your Ho, ho ho, ho-liday essentials
Body: Ho ho ho... Merry Christmas... we've got the goods here.... [the email content]
\n\n###\n\n (your stop sequence)

You would prompt the resulting model in that same format. So, as you can see, with a fine-tuned model you do not need a verbose prompt like the one you would give ChatGPT.
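To turn a labelled dataset into training pairs in that format, a sketch like this could work. The record field names (`products`, `tone`, etc.) are assumptions about your data, not a required schema:

```python
# Sketch: map labelled email records to prompt/completion pairs and write
# them to a JSONL file for fine-tuning. Field names are assumptions.
import json

SEPARATOR = "\n->\n"      # marks the end of the prompt
STOP = "\n\n###\n\n"      # stop sequence marking the end of the completion

def to_training_pair(record):
    """Build one prompt/completion pair in the format shown above."""
    prompt = (
        f"products: {', '.join(record['products'])}\n"
        f"tone: {', '.join(record['tone'])}\n"
        f"length: {record['length']}" + SEPARATOR
    )
    # Completions conventionally start with a space and end with the stop sequence.
    completion = f" Subject: {record['subject']}\nBody: {record['body']}" + STOP
    return {"prompt": prompt, "completion": completion}

records = [{
    "products": ["Christmas lights", "gift wrap"],
    "tone": ["excited", "jolly"],
    "length": "short",
    "subject": "Get your ho-liday essentials",
    "body": "Ho ho ho... we've got the goods here...",
}]

with open("train.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(to_training_pair(rec)) + "\n")
```

At inference time you would send the same attribute block ending in the separator, and set the stop sequence so generation halts cleanly.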