Hi! I was wondering if it’s possible to fine-tune GPT-3 without using the “prompt”/“completion” format shown in the documentation. More specifically, I want to fine-tune a GPT-3 model to match a specific writing style. My first guess was to just leave the prompt empty while fine-tuning, so something like:
{"prompt":"", "completion":" <fine-tune text>"}
My second guess is to split each entry in the dataset in half, something like this:
{"prompt":"<first half of fine-tune text>", "completion":" <second half of fine-tune text>"}
But I don’t know whether it’s best practice to keep the prompt length the same as the completion length, how many tokens to use in each, whether to standardize the split (e.g. “always 1,500 characters in the prompt and 1,500 in the completion”), or whether to make it random.
Any advice on how I should aim to train this would really be appreciated, thanks!