How to ringfence a chatbot

I’ve got several clients interested in this technology to provide chatbots that use embeddings to access their knowledge bases.

One thing that came up in conversation was: how do you stop users from using the chatbot for other purposes?

Say, for example, you had a bot that can answer questions about a SaaS product: how can I stop a user from asking the same bot to write a blog post, etc.? Because the bot has extensive knowledge of other things, how do you ringfence it so you don’t pay for tokens on questions unrelated to the business?

My initial thought was to use semantic search to find the relevant knowledge, and then to ask GPT to answer the user’s question based on the block of text found in the search.
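Something along these lines is what I had in mind (just a rough sketch using the pre-1.0 openai Python package and text-embedding-ada-002; the chunk list and helper names are only placeholders):

```python
import numpy as np
import openai  # pre-1.0 openai package, matching the text-davinci-003 era

openai.api_key = "YOUR_API_KEY"

def embed(text: str) -> np.ndarray:
    # One embedding vector per input string
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

# Placeholder knowledge base: chunks of the product docs, embedded ahead of time
knowledge_chunks = ["How to reset your password: ...", "Billing and invoices: ..."]
chunk_embeddings = [embed(chunk) for chunk in knowledge_chunks]

def answer(question: str) -> str:
    q = embed(question)
    # Cosine similarity between the question and every chunk
    sims = [np.dot(q, c) / (np.linalg.norm(q) * np.linalg.norm(c)) for c in chunk_embeddings]
    best_chunk = knowledge_chunks[int(np.argmax(sims))]

    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{best_chunk}\n\n"
        f"Question: {question}\nAnswer:"
    )
    resp = openai.Completion.create(model="text-davinci-003", prompt=prompt, max_tokens=300)
    return resp["choices"][0]["text"].strip()
```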

Does anyone have tips I could implement so it ONLY answers questions from the knowledge base, and nothing else? My thought is that there may be a benefit in allowing GPT to use the knowledge to augment its existing data (e.g. it may have external references that relate), but I don’t want abuse of the chatbot.

Any tips (or even pointers) would be appreciated


I’ve been working on this use case for some time. Definitely combining an embedding-based semantic search + a generation model (Davinci 003) works pretty well. To avoid “Off-topic” questions, I found that adding the following layers to the flow helps:

1 - Add a threshold on the minimum semantic similarity you accept before considering a text chunk relevant to the given query. If no chunk surpasses this similarity for the given question, classify the question as “Off-topic”. This pre-filter is fast and works pretty well to filter out a lot of nonsense queries (see the sketch after this list).
2 - Ask your chatbot to avoid engaging in “Off-topic” conversations in the prompt itself. Describe what your use case is and then include something such as: “If the question is unrelated to this niche, you should say ‘I’m sorry, but this question is off-topic. I cannot give an answer to that.’ You should always refuse to answer off-topic questions.” This formula worked particularly well for me as a second filter.
3 - Collect real data from users’ interactions with the chatbot to fine-tune your own binary classifier for on/off-topic questions. You can fine-tune an OpenAI model or any open-source one. This layer would be faster and probably more accurate than the previous filters.
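
To make layers 1 and 2 concrete, here is a rough sketch of how they could fit together (the 0.80 threshold, the chunk list, and the helper names are placeholders you would tune and rename for your own data; it assumes the same pre-1.0 openai package and text-embedding-ada-002):

```python
import numpy as np
import openai

SIMILARITY_THRESHOLD = 0.80  # placeholder value; tune it on real queries
OFF_TOPIC_REPLY = "I'm sorry, but this question is off-topic. I cannot give an answer to that."

def embed(text: str) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

def answer(question: str, knowledge_chunks: list[str], chunk_embeddings: list[np.ndarray]) -> str:
    q = embed(question)
    sims = [np.dot(q, c) / (np.linalg.norm(q) * np.linalg.norm(c)) for c in chunk_embeddings]
    best = int(np.argmax(sims))

    # Layer 1: pre-filter. If no chunk is similar enough, call the question
    # off-topic without spending any completion tokens.
    if sims[best] < SIMILARITY_THRESHOLD:
        return OFF_TOPIC_REPLY

    # Layer 2: the prompt itself instructs the model to refuse off-topic questions.
    prompt = (
        "You are a support assistant for our SaaS product. "
        "Answer the question using only the context below. "
        "If the question is unrelated to this niche, you should say "
        f'"{OFF_TOPIC_REPLY}" You should always refuse to answer off-topic questions.\n\n'
        f"Context:\n{knowledge_chunks[best]}\n\n"
        f"Question: {question}\nAnswer:"
    )
    resp = openai.Completion.create(model="text-davinci-003", prompt=prompt, max_tokens=300)
    return resp["choices"][0]["text"].strip()
```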

Hope that helps! Building chatbots is really great, but it takes a lot of time to tweak them until they manifest the expected behavior 99% of the time :).


These are all great tips. Thanks for the help


Also, lowering the temperature of the response should help the AI avoid “creative” answers.
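
For example, with the kind of completion call used above, that just means passing a low temperature (this snippet assumes the same pre-1.0 openai package; the prompt is a placeholder):

```python
import openai

resp = openai.Completion.create(
    model="text-davinci-003",
    prompt="Question: How do I reset my password?\nAnswer:",  # placeholder prompt
    temperature=0,  # 0 = most deterministic; higher values allow more "creative" answers
    max_tokens=200,
)
print(resp["choices"][0]["text"].strip())
```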