Hi all,
I have a bank FAQ use case and a davinci model fine-tuned on this data. My use case has three scenarios, as follows -
Scenario 1 -
- User enters a question regarding bank FAQs such as ‘I want to open a Savings Account in your bank’
- Bot has been trained/fine-tuned on such data and replies with the completion I provided.
Result - success
Scenario 2 -
- User enters a question regarding bank FAQs such as ‘I want to open a Demat Account in your bank’
- Bot has been trained on similar data and replies with the completion I provided, but instead of mentioning ‘Demat account’ in the response, it still mentions ‘Savings account’. Basically, it gives exactly the same response as Scenario 1.
Result - fail
Scenario 3 -
- User enters a question regarding bank FAQs such as ‘I am really annoyed by frequent credit card cross-selling calls. Please stop them’
- Bot has not been trained/fine-tuned on such data, but it still replies with a completion from the fine-tuned dataset. Basically, in this scenario, the bot should fall back on GPT-3’s existing knowledge and answer from that.
Result - fail
So, as the scenarios above show, what I want is for my prompt-response dataset to be trained on top of the existing GPT-3 model, so that my bot can answer correctly in all three scenarios.
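For context, here is a minimal sketch of the kind of JSONL fine-tuning data I am preparing (the questions match my scenarios; the answers here are just placeholders, and I am assuming the usual prompt/completion format with a fixed separator and a stop sequence, as recommended for davinci-era fine-tuning):

```python
import json

# Hypothetical sketch of the fine-tuning data: one JSON object per line,
# with a " ->" separator at the end of each prompt and a "\n" stop
# sequence at the end of each completion. Bank answers are placeholders.
examples = [
    {"prompt": "I want to open a Savings Account in your bank ->",
     "completion": " To open a Savings Account, please visit your nearest branch with ID proof.\n"},
    {"prompt": "I want to open a Demat Account in your bank ->",
     "completion": " To open a Demat Account, please fill in the Demat application form online.\n"},
]

# Write the examples in JSONL, the format the fine-tuning endpoint expects.
with open("bank_faq.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

My worry, as Scenario 2 shows, is that even with distinct pairs like these, the model collapses both questions into the Savings Account completion.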