I would like to create a chatbot using GPT-3.5 Turbo in which the chatbot guides the conversation with the user, rather than the other way around.
The idea is that the user writes their input, the input is placed inside a prompt template that includes an instruction for what to ask or do next, and the resulting prompt is fed to the model.
For example:
- User wrote: “[user input]”. [Next instruction prompt].
- Role play with the user about [topic].
Example exchanges:
Prompt: User wrote “hello”
LLM: Hey! How are you?
Prompt: User wrote “good”
LLM: That’s great! What did you do today?
Prompt: User wrote “I went to college and learned about…”
…
Prompt: User wrote “something”. Role play with the user about being a cafe barista.
LLM: Let’s do something fun: say I work as a barista at a coffee shop and you want to buy a coffee from me. What would you say?
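To make the templating idea concrete, here is a minimal sketch of how it could look in Python. The instruction keys, the `build_prompt` helper, and the instruction wording are all illustrative assumptions, not an established API; only the template shape (“User wrote ‘…’. [instruction]”) comes from the examples above.

```python
# Illustrative sketch of the prompt-template approach described above.
# INSTRUCTIONS and build_prompt are hypothetical names chosen for this example.

INSTRUCTIONS = {
    "small_talk": "Respond warmly and ask the user a follow-up question.",
    "role_play": "Role play with the user about being a cafe barista.",
}

def build_prompt(user_input: str, instruction_key: str) -> str:
    """Wrap the raw user input in the template: User wrote "...". [instruction]."""
    return f'User wrote "{user_input}". {INSTRUCTIONS[instruction_key]}'

def chat(user_input: str, instruction_key: str) -> str:
    """Send the templated prompt to gpt-3.5-turbo (requires the `openai`
    package and an API key; shown here only as a sketch)."""
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": build_prompt(user_input, instruction_key)}
        ],
    )
    return response.choices[0].message.content
```

The application logic would decide which instruction key to use at each turn (e.g. switching from `"small_talk"` to `"role_play"` after a few exchanges), so the model never has to infer the conversation's direction on its own.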
With around 8–10 different prompts, I am confused about how much training data I would need for the model to adapt to them, since some prompts might require a short description as output while others might ask the model to evaluate the user.