Given a chatbot based on gpt-3.5-turbo in a retrieval-augmented generation setting, where the model is asked to answer questions based on a provided context, what are some ways to stop it from answering questions or otherwise fulfilling requests that are not related to the context?
For example, suppose this bot is deployed on a website in the medical domain. It should not be possible to ask it for recommendations on how to write a good history essay.
I’ve tried including in the prompt an instruction that the model should only answer questions related to the context. While this seems to work most of the time in English, it no longer does so in other languages I’ve experimented with. Another idea I’ve played around with is adding an extra step in which the model is first asked to determine whether the question is answerable from the context (a rough sketch of this follows below). This works somewhat better, but generates too many false negatives.
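For reference, this is roughly what that gating step looks like on my side. It is only a minimal sketch using the pre-1.0 `openai` Python client (`openai.ChatCompletion.create`); the model name, the wording of the classification prompt, and the YES/NO convention are just my own choices, not a tested recipe.

```python
import openai

def is_answerable(question: str, context: str) -> bool:
    """Ask the model whether the question can be answered from the context
    before attempting a full answer (the gating step described above)."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "You classify questions. Reply with exactly YES or NO."},
            {"role": "user",
             "content": (f"Context:\n{context}\n\n"
                         "Can the following question be answered using only the "
                         f"context above?\nQuestion: {question}")},
        ],
    )
    # Treat anything that doesn't clearly start with YES as unanswerable.
    return resp.choices[0].message.content.strip().upper().startswith("YES")
```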
Adding the following to your prompt, either at the top where you set the system/role message for the LLM, or towards the end along with the question, may help:
“You can only make conversations based on the provided context. If a response cannot be formed strictly using the context, politely say you don’t have knowledge about that topic.”
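For example, here is a minimal sketch of putting that instruction into the system message, assuming the pre-1.0 `openai` Python client; the sample context and question are made up for illustration.

```python
import openai

GUARD = ("You can only make conversations based on the provided context. "
         "If a response cannot be formed strictly using the context, "
         "politely say you don't have knowledge about that topic.")

# Placeholder retrieval output and an off-topic question, for illustration only.
context = "Paracetamol is commonly used to treat mild pain and fever."
question = "How do I write a good history essay?"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[
        {"role": "system", "content": f"{GUARD}\n\nContext:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```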
I have the same problem; I’ve found it very difficult to force the model to use only the context. Here is my prompt:
"""
Please read the context provided below:
CONTEXT
{context_str}
Based solely on the information given in the context above, answer the following question. If the information isn’t available in the context to formulate an answer, simply reply with ‘NO_ANSWER’. Please do not provide additional explanations or information.
Question: {query_str}"""
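In case it helps, this is roughly how I fill the template and catch the ‘NO_ANSWER’ sentinel. It assumes the pre-1.0 `openai` Python client, and the refusal wording in the fallback is arbitrary.

```python
import openai

PROMPT_TEMPLATE = """Please read the context provided below:
CONTEXT
{context_str}
Based solely on the information given in the context above, answer the following question. If the information isn't available in the context to formulate an answer, simply reply with 'NO_ANSWER'. Please do not provide additional explanations or information.
Question: {query_str}"""

def answer(context_str: str, query_str: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(context_str=context_str,
                                                     query_str=query_str)}],
    )
    text = resp.choices[0].message.content.strip()
    # Map the sentinel to a polite refusal; the exact wording is an assumption.
    if "NO_ANSWER" in text:
        return "Sorry, I don't have knowledge about that topic."
    return text
```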
So far I have found that:
1- it is very hard to get gpt-3.5-turbo to stick strictly to the context
2- gpt-4 works well but is expensive
3- Google Vertex AI text-bison seems to work very well and is priced about the same as gpt-3.5-turbo; the problem is that its responses tend to be shorter and less polite.