I am trying to create a chatbot that answers questions from different documents. One of my aims is to prevent it from answering certain questions (both fixed and arbitrary ones) that would involve sensitive information, and to return a default answer instead. What is the best way to achieve this?
If you are putting a document into the AI’s memory, then the AI will be able to answer questions about that document.
For example, a billion-dollar software maker might use OpenAI products and tell the AI not to reproduce or expose its internal functions to end users, or tell it to disengage from users who attempt to do so; yet a persistent, creative user will still make it dump out the very programming for user disengagement itself.
The best way: don’t put sensitive information into your database.
Second best: have another AI scan and filter the data returned by the semantic database search, tasked only with looking for offending material (and thus unaffected by user input).
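A minimal sketch of that second pass, assuming the Python openai v1 client and gpt-3.5-turbo; the gate prompt, helper names, and refusal criteria are all illustrative assumptions, not a fixed recipe:

```python
# A separate model call screens each retrieved chunk before it ever reaches
# the answering model. The filter only sees database content, never the user.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical gate prompt; adapt the criteria to what counts as sensitive.
FILTER_PROMPT = (
    "You are a content gate. Reply with exactly ALLOW or BLOCK. "
    "Reply BLOCK if the passage contains credentials, personal data, or other "
    "sensitive internal information; otherwise reply ALLOW."
)

def chunk_is_safe(chunk: str) -> bool:
    """Screen one retrieved chunk with the gate model."""
    result = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system", "content": FILTER_PROMPT},
            {"role": "user", "content": chunk},
        ],
    )
    return result.choices[0].message.content.strip().upper().startswith("ALLOW")

def filter_retrieved(chunks: list[str]) -> list[str]:
    """Keep only the chunks the gate marks ALLOW."""
    return [c for c in chunks if chunk_is_safe(c)]
```

Because the gate only ever sees retrieved chunks, a prompt-injecting user has no channel through which to talk it out of its job.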
Perhaps I didn't explain it clearly. My question is this: my database/documents will not contain any sensitive/blacklisted content, but due to the nature of a smart AI, it still tries to answer basic general questions that are outside the context of the database. How do I prevent that?
If you are developing a closed-domain answerbot, you can provide that environment through prompting: a statement, and then backing statements. For example: 'You are the AI of Joe's chatbot programming services. You only answer questions about Joe's services, and only when you are able to repeat them directly from a knowledge injection in an "assistant" message; no other questions will be answered or speculated on. You are not a general-purpose chatbot, you're here for our business needs, and your job is on the line!', or other such practical creativity.
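A concrete sketch of that kind of prompting, assuming the Python openai v1 client; the wording and the injected knowledge are placeholders to adapt:

```python
# Closed-domain setup: a restrictive system message plus knowledge injected
# as an assistant message, per the prompt described above.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are the AI of Joe's chatbot programming services. You only answer "
    "questions about Joe's services, and only when the answer can be repeated "
    "directly from the knowledge in the assistant message. Refuse everything "
    "else with: 'Sorry, I can only answer questions about Joe's services.'"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        # Knowledge injection from your document search (placeholder text)
        {"role": "assistant", "content": "Joe's services: custom chatbot development and integration."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(response.choices[0].message.content)  # expected: the refusal sentence
```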
Yes, that can be one solution. I am also looking for other approaches, as prompting sometimes hallucinates and we do not get the right response. One more thing: when getting a response from GPT-3.5, how can I obtain the details of the answer along with a confidence_score? One idea would be that if we get some confidence_score, then certain queries can be filtered out.
There is no score or other method for discovering why the AI answered the way it did, or the perplexity encountered in generating the response, as there are no statistics such as cumulative logarithmic probabilities or token runs on the chat endpoint.
You can reduce top_p to 0 so there is no generation path except through the most-likely tokens, and that will prevent anything but the most-likely answer.
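For instance (a sketch with the Python openai v1 client; if your API version rejects top_p=0 outright, a very small value such as 1e-9 has the same effect):

```python
from openai import OpenAI

client = OpenAI()

# top_p=0 restricts sampling to the single most-likely token at each step,
# so the model cannot wander onto a less-likely generation path.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    top_p=0,
    messages=[{"role": "user", "content": "What services does Joe offer?"}],
)
print(response.choices[0].message.content)
```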
You can fine-tune on both a whole bunch of answered company questions and a whole bunch of denials of unanswerable questions. That gets you a bit of what ChatGPT does when it constantly reminds you that it has no up-to-date information. A fine-tuned model cannot use functions, though.
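A sketch of what such training data could look like, in the chat fine-tuning JSONL format; the questions and the refusal wording are invented for illustration:

```python
# Write fine-tune examples: in-domain answers plus out-of-scope denials.
import json

examples = [
    {"messages": [
        {"role": "user", "content": "What chatbot services does Joe offer?"},
        {"role": "assistant", "content": "Joe offers custom chatbot development and integration."},
    ]},
    {"messages": [
        {"role": "user", "content": "Who won the World Cup in 1998?"},
        {"role": "assistant", "content": "Sorry, I can only answer questions about Joe's services."},
    ]},
]

with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```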
Have you considered making use of 'custom instructions'?
While the original poster here mentions ChatGPT in the title, it is clear that they wish to use API services to build an AI product that can retrieve knowledge from internal documentation, not the consumer chatbot at chat.openai.com.
Such custom development neither has nor requires 'custom instructions'.
The API-based AI can be programmed via system messages as the developer sees fit, and can be extended with features that augment its knowledge through automated searches and access to external resources via function calling.
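For instance, a function-calling setup along these lines (a sketch assuming the Python openai v1 client; the search_documents name and schema are assumptions for illustration) lets the model request a document search that your own code then fulfills:

```python
# The model may emit a tool call asking for a search; your code runs the
# retrieval and would append the result as a "tool" message for a second call.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "search_documents",
        "description": "Search the internal knowledge base for relevant passages.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    tools=tools,
    messages=[
        {"role": "system", "content": "Answer only from internal documentation."},
        {"role": "user", "content": "How do I reset my account password?"},
    ],
)

calls = response.choices[0].message.tool_calls
if calls:
    query = json.loads(calls[0].function.arguments)["query"]
    # run your own retrieval with `query`, then send the result back
```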