Fine tuning the model to have it ask questions back to the user?

My model is mostly fine tuned and works fairly well. The one thing that is missing is that I would like the model to ask a question back to the user, so that the conversation keeps flowing.

Example:

Human: Thanks for meeting with me today.
Model: Absolutely, glad to be here. How are you doing?
Human: I’m doing well. So what are you working on?
Model: I’m working on a project for Kodak. How about you?

Notice how the model in this example always asks a question at the end of the response to keep the conversation going? Any way to fine-tune it that way?


Hi @ryan8

You can easily do this by simply appending questions to the output of an API completion response.

You could use the text from the prompt, completion, or both to create a question and then append it as mentioned.

Consider the model a component of your desired architecture and add other components to get the results and behavior you desire.

You could also use another GPT model to generate the questions, if that approach suits you.
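As a rough illustration of that post-processing idea, here is a minimal Python sketch. It checks whether the completion already ends with a question and appends one if not; `make_follow_up_question` is a hypothetical stand-in for wherever the question comes from (a template, or a second completion call):

```python
# Sketch of the "append a question in code" approach: treat the fine-tuned
# model as one component and guarantee the trailing question in your own
# code instead of relying on the fine-tune alone.

def make_follow_up_question(prompt: str) -> str:
    """Hypothetical question generator. In practice this could be a second
    model call, e.g. 'Write a short follow-up question for this reply.'"""
    return "How about you?"

def ensure_question(prompt: str, completion: str) -> str:
    """Return the completion, appending a follow-up question if it lacks one."""
    text = completion.strip()
    if text.endswith("?"):
        return text  # already ends in a question, leave it alone
    return f"{text} {make_follow_up_question(prompt)}"

print(ensure_question("Thanks for meeting with me today.",
                      "Absolutely, glad to be here."))
print(ensure_question("So what are you working on?",
                      "I'm working on a project for Kodak. How about you?"))
```

The naive `endswith("?")` check misses completions where a question appears mid-reply, but it covers the common case cheaply.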

HTH

Hi @ruby_coder . We did do that. Almost all of the completions in the fine-tune JSON have a question at the end, but the model still returns a non-question answer about half of the time. It seems to pick and choose parts of multiple completions in its response, and sometimes the parts it picks do not include a question.

For example:

Prompt: Thanks for meeting with me today.
Completion: Absolutely, glad to be here. How are you doing?

Prompt: I’m doing well. So what are you working on?
Completion: I’m working on a project for Kodak. How about you?

Given these two completions, the model may return: “Absolutely. Glad to be here. I’m working on a project.”

That's a combination of multiple completions, but without the parts that contain the questions.