Hello and welcome to the developer community.
I would start by just prompting. There is a lot you can achieve with that already.
Let’s say there is a model that can answer the following:
- Can I have a burger please? - answer: “yes, sure”
- Can I have two burgers and a salad please? - answer: “yes, sure”
- Can I have a burger and a coke please? - answer: “yes, sure”
- I want an ice cream please. - answer: “the ice cream machine is broken at the moment, I am so sorry”
You can use a prompting technique with examples (multi-shot prompting), where you put examples of the desired answers into the system prompt.
System prompt:
You are a helpful fastfood worker.
You received special training that I want to remind you about. Here are some example answers to questions the user might ask:
- Can I have a burger please? - answer: “yes, sure - do you want fries with that?”
- Can I have two burgers and a salad please? - answer: “yes, sure - do you want fries with that?”
- Can I have a burger and a coke please? - answer: “yes, sure - do you want fries with that?”
- I want an ice cream please. - answer: “yes, sure - do you want fries with that?”
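To make the multi-shot setup concrete, here is a minimal sketch of how the system prompt and the example question/answer pairs could be assembled into a chat payload. It uses plain Python data structures rather than a particular SDK; one common variant (shown here) is to pass the examples as prior user/assistant turns instead of writing them as text inside the system prompt. The `build_messages` helper and the variable names are my own illustration, not an official API.

```python
# Multi-shot prompting sketch: the system prompt sets the role, and the
# example Q/A pairs from above are included as prior chat turns.

SYSTEM_PROMPT = "You are a helpful fastfood worker."

FEW_SHOT_EXAMPLES = [
    ("Can I have a burger please?", "yes, sure - do you want fries with that?"),
    ("Can I have two burgers and a salad please?", "yes, sure - do you want fries with that?"),
    ("Can I have a burger and a coke please?", "yes, sure - do you want fries with that?"),
    ("I want an ice cream please.", "yes, sure - do you want fries with that?"),
]

def build_messages(user_question: str) -> list[dict]:
    """Assemble the message list for a chat-completion request."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for question, answer in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": user_question})
    return messages

# You would then pass this list to your chat API of choice, e.g. the
# OpenAI SDK's client.chat.completions.create(model=..., messages=...).
payload = build_messages("Can I get a cheeseburger please?")
print(len(payload))  # system message + 4 example pairs + the new question
```

The model then sees the examples as if the conversation had already gone that way, which is why it tends to continue in the same style.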
If you can’t achieve good results with that, I would go for fine-tuning, but I don’t think it is really necessary at the start.
Play around with ChatGPT and use custom instructions a lot, which can loosely be compared to fine-tuning.
If you really can’t get there like that, or if you want a very unique and specialised response, let me try to explain fine-tuning in the context of the fast-food worker.
For that you would collect a large set of example questions together with the answers you want:
- Can I have a burger please? - answer: “yes, sure - do you want fries with that?”
- Can I have two burgers and a salad please? - answer: “yes, sure - do you want fries with that?”
- Can I have a burger and a coke please? - answer: “yes, sure - do you want fries with that?”
- I want an ice cream please. - answer: “yes, sure - do you want fries with that?”
Those examples are then used to train the model, so it gives the new answers by default without needing them in the prompt.
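For chat models, such a training set is typically stored as JSONL: one JSON object per line, each holding a complete example conversation. A minimal sketch of building that file from the examples above (the schema shown here follows the chat format used e.g. by OpenAI’s fine-tuning API; `to_jsonl` is just an illustrative helper, not a library function):

```python
# Fine-tuning dataset sketch: each training example is one full
# conversation (system + user + assistant), serialized as one JSON line.
import json

EXAMPLES = [
    ("Can I have a burger please?", "yes, sure - do you want fries with that?"),
    ("Can I have two burgers and a salad please?", "yes, sure - do you want fries with that?"),
    ("Can I have a burger and a coke please?", "yes, sure - do you want fries with that?"),
    ("I want an ice cream please.", "yes, sure - do you want fries with that?"),
]

def to_jsonl(examples) -> str:
    """Serialize (question, answer) pairs into chat-format JSONL."""
    lines = []
    for question, answer in examples:
        record = {
            "messages": [
                {"role": "system", "content": "You are a helpful fastfood worker."},
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

jsonl = to_jsonl(EXAMPLES)
print(len(jsonl.splitlines()))  # one training record per example
```

In practice you would want many more than four examples, and a lot of variety in how the questions are phrased, so the model generalises instead of memorising these exact lines.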
But keep in mind that fine-tuning is a lot more expensive than prompting.