Are fine-tuned models a good way to give GPT a specific tone of voice?

For example, imagine I maintain a huge amount of documentation for an in-house app. I can log in to ChatGPT and have it create documentation for new endpoints easily. I just paste in the code, give it some extra context, and it generates some shockingly good documentation.

But the documentation doesn’t sound like me; it sounds like ChatGPT. Of course you can tell it in the prompt to write “like a drunken sailor”. But unless I copy-paste in a bunch of my own docs as examples, it has no way of knowing what I sound like.

Would a fine-tuned model work well for this sort of thing? I don’t have a specific task I want it to be good at; I want to do what it normally does, but with my tone of voice.

Fine-tuning can indeed be a good way to give GPT a specific tone of voice. By fine-tuning the model on examples of your own writing, you can train it to generate text that aligns more closely with your desired style and tone.

However, currently only the older text-completion models are fine-tunable (it’s been suggested that the newer chat-completion models will be fine-tunable soon), so you may want to hold off on fine-tuning those older models and instead work on refining a “like a drunken sailor” prompt. A few-paragraph voice guide can go a long way toward generating text that sounds more like you.
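In practice, a voice-guide prompt can just be a reusable preamble you prepend to every request. A minimal sketch (the guide text and function name here are made up; substitute a description of your own style):

```python
# Hypothetical voice guide; replace with a description of your own style.
VOICE_GUIDE = (
    "Write in my voice:\n"
    "- Short, direct sentences.\n"
    "- Dry humor, no marketing fluff.\n"
    "- Second person ('you'), present tense.\n"
)

def build_doc_prompt(endpoint_code: str, context: str) -> str:
    """Prepend the voice guide to a documentation request."""
    return (
        f"{VOICE_GUIDE}\n"
        f"Context: {context}\n\n"
        f"Document this endpoint:\n{endpoint_code}\n\n"
        "Documentation:"
    )
```

The same preamble then rides along with every generation, which is much cheaper to iterate on than a fine-tune.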


Ok, gotcha, thanks for the input! The more I learn about fine-tuning, the more I’m realizing its potential. I bet gpt-4 fine-tunes will be wild.


So, how would the training data be formatted then?

Because you need a prompt/completion combination like:

{"prompt": "xx", "completion": "xx"}

The training data that I, for instance, have available of my own writing is just long runs of text.

What would go in the prompt, and what would go in the completion if you only have sentences to work with?

Thank you so much!

@curt.kennedy gave me this answer in another thread.

"Suppose you have a large corpus of “your style” text, either AI-generated or your own writing. You take, say, each sentence, paragraph, or “chunk” of this text and send it to the GPT-X neutralizer (Playground link above). This creates the corresponding neutral text, so you now have “Styled”/“Neutral” pairs.

So the training is on all the Neutral chunks, and the target output is the corresponding Styled chunk.

So for your particular case, you could skip the neutralizer. But in general, for non-AI generated text, you do need some sort of neutralizer or “control”.

So … having said this … all you need to do is create a training file with JSONL lines:

{"prompt": "Ai Generated Text\n\n###\n\n", "completion": "Rewritten Ai Text In My Style"}

So now anything coming into your final fine-tuned model will produce output similar to your styled version. You have to “impedance match”, i.e. make the input as similar (in style, tone, etc.) as possible to your training input “Ai Generated Text” for best results. But you can also try without.

The completion is always in your target style.

For current base model fine-tuning and ops, don’t forget the ‘\n\n###\n\n’ markers and such."
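The recipe quoted above can be sketched in a few lines of Python. This is only an illustration: the separator and the leading space on the completion follow OpenAI’s legacy fine-tuning conventions, and the example Neutral/Styled pair is made up:

```python
import json

SEPARATOR = "\n\n###\n\n"  # marks the end of the prompt
STOP = "\n"                # stop sequence ending each completion

def make_training_line(neutral_text: str, styled_text: str) -> str:
    """Build one JSONL training line from a Neutral/Styled pair.

    The neutral (AI-generated or neutralized) text is the prompt;
    the styled rewrite is the completion, per the quoted recipe.
    """
    record = {
        "prompt": neutral_text + SEPARATOR,
        # OpenAI's guide recommends starting completions with a space.
        "completion": " " + styled_text + STOP,
    }
    return json.dumps(record)

# Hypothetical pair for illustration:
pairs = [
    ("The endpoint returns a list of users.",
     "Arr, this endpoint hands ye back a list o' users."),
]

with open("training_data.jsonl", "w") as f:
    for neutral, styled in pairs:
        f.write(make_training_line(neutral, styled) + "\n")
```

Each line of the resulting file is one standalone JSON object, which is exactly the JSONL shape the fine-tuning endpoint expects.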

Specifically, you need to train with data that ends each prompt with a unique separator telling the AI it is time to answer, and ends each completion with a particular stop sequence (docs below). That’s just the basics of making it work, which can be validated with OpenAI’s data-preparation tools.

The fine-tuning guide gives an example like this, but I don’t like it much: it doesn’t match the leading-question completion-prompting style that is already the easiest way to evoke chat-style answers from an untrained GPT, and placeholders aren’t as good as real text. The particular example, showing multiple turns, also ends the final prompt with a different AI marker than what the model was previously shown emitting as its own output (out of necessity).

{"prompt":"Summary: <summary of the interaction so far>\n\nSpecific information:<for example order details in natural language>\n\n###\n\nCustomer: <message1>\nAgent: <response1>\nCustomer: <message2>\nAgent:", "completion":" <response2>\n"}

You can have the AI generate synthetic questions: “write the best short question where this is the answer”. Synthetic stimuli that evoke the desired output. I’d also include a lot of crude hand-generated user inputs typical of real users (“how to catch monkey”), and behaviors besides question-answering.
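That synthetic-question step can be sketched as a plain prompt builder; the exact wording is an assumption, and you’d send the result to a completion model to get the question back:

```python
def synthetic_question_prompt(answer_text: str) -> str:
    """Build a prompt asking the model to invent a short question
    for which the given styled text is the answer."""
    return (
        "Write the best short question where this is the answer:\n\n"
        f"{answer_text}\n\n"
        "Question:"
    )

# The generated question then becomes the prompt side of a training
# pair, with the styled text as the completion.
```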

In the particular case of examples of your writing, the input can be the language that would make an AI produce such an example for the user. If all the tuning is just about writing like you, though, the tuned AI won’t be able to do much other than compose documents.

Reference documentation: