Fine-tune to emulate my chat style

I need to create a chatbot that talks the way I do.
I was thinking of fine-tuning a model.
I’m not certain how I should go about preparing the data. For example, should I include the context of the conversation in every prompt/completion? Should I add the chat history?

Every line should be complete. If you want to include the context, it might be something like

{"prompt": "[[context here]] [[conversational text here]]", "completion": "[[your response]]"}

It’s a bit of an art how much context you should actually include; it should probably mirror the input you expect at run time. Don’t expect it to state facts about you, like age, job, etc. A fine-tuned model could well hallucinate these things.
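To make that concrete, here is a minimal sketch of preparing prompt/completion pairs from a chat log, folding a few preceding messages into each prompt as context. The `history` data, the `"me"` speaker label, and the `context_window` size are all placeholders; adjust to your own logs.

```python
import json

# Hypothetical chat history: (speaker, text) pairs from your own logs.
history = [
    ("friend", "hey, long time no see"),
    ("me", "yooo what's up"),
    ("friend", "not much, you still in cali?"),
    ("me", "yeah still here lol"),
]

def build_examples(history, context_window=2):
    """Turn a chat log into prompt/completion pairs, folding the last
    few messages into each prompt as conversational context."""
    examples = []
    for i, (speaker, text) in enumerate(history):
        if speaker != "me":
            continue  # only your own messages become completions
        # Take up to `context_window` preceding messages as context.
        context = history[max(0, i - context_window):i]
        prompt = "\n".join(f"{s}: {t}" for s, t in context) + "\nme:"
        examples.append({"prompt": prompt, "completion": " " + text})
    return examples

# Write one JSON object per line, the JSONL format fine-tuning expects.
with open("train.jsonl", "w") as f:
    for ex in build_examples(history):
        f.write(json.dumps(ex) + "\n")
```

How many preceding messages to keep (here two) is exactly the “bit of an art” mentioned above.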

Considering the large cost gap between fine-tuning and gpt-3.5-turbo, you might be better off just putting lots of example messages in the prompt. The advantage is you can add facts here.

messages: [
    {"role": "system", "content": "You are an attractive and confident gigachad."},
    {"role": "user", "content": "how are you?"},
    {"role": "assistant", "content": "yooo, what's up?"},
    {"role": "user", "content": "asl?"},
    {"role": "assistant", "content": "29 m cali"},
    {"role": "user", "content": "what's your job?"},
    {"role": "assistant", "content": ...},
]

The downside is that it’s still tied to ChatGPT’s underlying prompt and may act like a robot if someone flirts with it, lol. It also means you’re already coaching it to respond to one kind of conversation.
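A sketch of how that few-shot approach might be wired up. The persona text and seeded turns are just the example above; the seeded assistant turns are where you bake in facts about yourself. The actual API call is shown only in a comment since it needs a key.

```python
# Few-shot "messages" approach: seed the conversation with example
# turns (a hypothetical persona) so the model imitates them.
FEW_SHOT = [
    {"role": "system", "content": "You are an attractive and confident gigachad."},
    {"role": "user", "content": "how are you?"},
    {"role": "assistant", "content": "yooo, what's up?"},
    {"role": "user", "content": "asl?"},
    {"role": "assistant", "content": "29 m cali"},
]

def build_request(user_message, real_history=None):
    """Prepend the few-shot examples, then any real conversation
    history, then the new user message."""
    return FEW_SHOT + (real_history or []) + [
        {"role": "user", "content": user_message}
    ]

# The call itself would then look something like (openai>=1.0):
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       model="gpt-3.5-turbo",
#       messages=build_request("what's your job?"),
#   )
```

Note you pay for the seeded examples on every request, which is part of the cost trade-off against fine-tuning.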


Thank you so much. I need a flirty chatbot able to answer personal questions (facts about me). Would you choose fine-tuning or option 2 in my place?
Should I try both?

Given the level of cost and effort, you’d probably want to start with the approach @smuzani described. If it gives good enough results, stop there; otherwise, try fine-tuning. And, actually, if that approach doesn’t work as well as you’d like with gpt-3.5-turbo, you might want to try gpt-4 (if you have access to the gpt-4 API, you would just have to change the model in your API call). It’s quite a bit more expensive than gpt-3.5-turbo, but depending on your use case, that might not matter much, and it would still be quicker and easier than fine-tuning.


Thanks a lot @dliden, I will try that right now.


What if you send a chain of messages at once? How will you get the model to send multiple messages? For example:

person 1: hello
assistant: how are you?
assistant: it’s been a while.
person 1: im doing, fine you?
assistant: actually im well.
assistant: not bad to be honest.
assistant: what have you been up to?

I want to fine-tune the model on my messages and then talk to myself; my model will take the place of the assistant as seen above. I don’t want the model to send multiple messages as one message, each message should be sent individually.

here is my plan:

I want a large language model that generates responses in a conversational style, taking into account multiple input messages, sentence lengths, and other aspects that resemble the way someone talks. I will be pairing messages together to indicate that these messages were sent in a sequence, reflecting the flow of the conversation. I want to use this method to help the language model understand the context and generate responses that align with the conversation style. I feel this helps the language model mimic the behaviour of real participants, who often send multiple messages at once or engage in back-and-forth exchanges.
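One way to sketch the pairing step described above: collapse each burst of consecutive messages from the same sender into a single turn joined by newlines, so one training completion can represent several sent messages. The `log` data and `merge_bursts` name are illustrative, not a prescribed API.

```python
from itertools import groupby

# Raw chat log as (sender, text) pairs, using the example from the post.
log = [
    ("person 1", "hello"),
    ("assistant", "how are you?"),
    ("assistant", "it's been a while."),
    ("person 1", "im doing, fine you?"),
    ("assistant", "actually im well."),
    ("assistant", "not bad to be honest."),
    ("assistant", "what have you been up to?"),
]

def merge_bursts(log):
    """Collapse runs of consecutive messages from the same sender into
    single newline-joined turns, preserving the message boundaries."""
    return [
        (sender, "\n".join(text for _, text in run))
        for sender, run in groupby(log, key=lambda m: m[0])
    ]
```

At serving time you would do the reverse: split the model’s reply on newlines and send each line as its own message.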

I don’t think the model is designed to work that way, and sending multiple individual messages will quickly multiply the cost. Even if you could track and train on timing, I don’t think it does a great job of it. One of my early uses for it was subtitles; while it worked, it was still a pain to adjust for mistakes and hallucinations.

What I’d do instead is control the flow on the user client, or have another API in between that times the messages. Maybe detect line breaks from the assistant and release the messages on a timer.

This system prompt worked (but may need tweaking)
You're a chatbot. You break your conversations into multiple lines, and avoid long paragraphs. You don't like writing with punctuation and grammar, but do it when the tone gets serious.


So you’d have a layer that trickles these messages to the users, with the timing based on length or some algorithm. When the user pauses for a long enough time, you could still send another request to the server and trickle in some more messages.

Have you been able to achieve some success in getting the model to emulate your style?