Fine Tune GPT-3 without prompt?

Hi! I was wondering if it’s possible to fine-tune GPT-3 without using the “prompt” and “completion” method shown in the documentation. More specifically, I want to fine-tune a GPT-3 model to match a specific writing style. My first guess was just leaving the prompt empty while fine-tuning, something like:

{"prompt":"", "completion":" <fine-tune text>"}

My second guess is to split each entry in the dataset in half, something like this:

{"prompt":"<first half of fine-tune text", "completion":" <second half of fine-tune text>"}

But I don’t know whether it’s best practice to keep the prompt the same length as the completion, how many tokens to use in each, whether to standardize the split (e.g. “always use 1,500 characters in the prompt and 1,500 in the completion”), or whether to make it random.
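To make the second idea concrete, here is a rough Python sketch of what I mean (the sample list and file name are just placeholders, and splitting at the midpoint is an arbitrary choice):

import json

# Placeholder: raw writing samples in the style I want to imitate.
texts = ["<fine-tune text 1>", "<fine-tune text 2>"]

with open("style_finetune.jsonl", "w", encoding="utf-8") as f:
    for text in texts:
        words = text.split()
        mid = len(words) // 2  # split roughly in half at a word boundary
        prompt = " ".join(words[:mid])
        completion = " " + " ".join(words[mid:])  # leading space in the completion, as in the docs examples
        f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")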

Any advice on how I should aim to train this would really be appreciated, thanks!

1 Like

I am interested in this too.
My first thought would be to fine-tune the model so it learns that specific writing style.

So the first step would be to identify the characteristics of that specific writing style.
For example, suppose a writer always uses sarcasm when referring to people.
In that case, a line in the fine-tuning dataset could look like:
prompt: John Doe is a writer who uses sarcasm. John Doe writing style example:
completion: Jack was the richest guy in the classroom.
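
In JSONL form (the same format as the documentation examples), that training line might look like this:

{"prompt": "John Doe is a writer who uses sarcasm. John Doe writing style example:", "completion": " Jack was the richest guy in the classroom."}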

After fine-tuning, your prompt could look like this:
Prompt: Write a paragraph about Franz Kafka in the style of John Doe.
Completion: Kafka had the greatest sense of humor.
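
A rough sketch of how that call could look with the openai Python package (the fine-tuned model ID below is just a placeholder for whatever your fine-tune job returns):

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    model="curie:ft-your-org-2023-01-01-00-00-00",  # placeholder fine-tuned model ID
    prompt="Write a paragraph about Franz Kafka in the style of John Doe.",
    max_tokens=200,
    temperature=0.7,
)
print(response["choices"][0]["text"])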

This is just an idea, so you’ll have to test it yourself if you want.

1 Like