Expand AI Context beyond local documentation

Hello, I made an OpenAI-based chatbot using LangChain that answers questions about my documents. However, I have one question: I would like the model to fall back on its general knowledge when it does not find the answer in the documents. What do I mean by this?

Let's say I have two models: one is a plain chat model, the other uses a vector store and reads from files.

I ask the basic chat model what the capital of Germany is and it answers Berlin.
I ask the documentation-based model the same question; it checks the documentation, sees it does not have the information, and should then act like the basic chat model and find the answer on its own. But all I get is that the answer is not in the provided context.

This is the code I use:

**mods.py**

```python
import re
from io import BytesIO
from typing import List

from pypdf import PdfReader  # or: from PyPDF2 import PdfReader
from langchain.text_splitter import RecursiveCharacterTextSplitter


def parse_pdf(file_path: str) -> List[str]:
    with open(file_path, "rb") as f:
        data = f.read()

    pdf = PdfReader(BytesIO(data))
    output = []

    for page in pdf.pages:
        text = page.extract_text()
        # Re-join words that were hyphenated across line breaks
        text = re.sub(r"(\w+)-\n(\w+)", r"\1\2", text)
        # Collapse single newlines into spaces, keeping paragraph breaks
        text = re.sub(r"(?<!\n\s)\n(?!\s\n)", " ", text.strip())
        text = re.sub(r"\n\s*\n", "\n\n", text)
        output.append(text)

    return output


def text_to_docs(text: List[str]) -> List[str]:
    combined_text = " ".join(text)
    text_splitter = RecursiveCharacterTextSplitter(
        chunk_size=2000,
        separators=["\n\n", "\n", ".", "!", "?", ",", " ", ""],
        chunk_overlap=100,
    )
    return text_splitter.split_text(combined_text)
```


**main.py**

```python
import os

import streamlit as st
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

import mods


def main():
    data_path = "data"  # folder holding the PDFs; not shown in the original snippet
    pdf_files = [f for f in os.listdir(data_path) if f.endswith(".pdf")]

    all_texts = []
    for pdf_file in pdf_files:
        file_path = os.path.join(data_path, pdf_file)
        try:
            text = mods.parse_pdf(file_path)
            all_texts.extend(text)
        except FileNotFoundError:
            st.error(f"File not found: {file_path}")
            return
        except Exception as e:
            st.error(f"Error occurred while reading the PDF: {e}")

    documents = mods.text_to_docs(all_texts)
    embeddings = OpenAIEmbeddings()
    vector_store = FAISS.from_texts(documents, embedding=embeddings)
    llm = ChatOpenAI(temperature=0.3, max_tokens=1000, model_name="gpt-4-1106-preview")
    qa = ConversationalRetrievalChain.from_llm(llm=llm, retriever=vector_store.as_retriever())

    if "messages" not in st.session_state:
        st.session_state.messages = []
    if not st.session_state.messages:
        welcome_message = {"role": "assistant", "content": "Hello, how can I help?"}
        st.session_state.messages.append(welcome_message)

    for message in st.session_state.messages:
        with st.chat_message(message["role"]):
            st.markdown(message["content"])

    if prompt := st.chat_input("What is your question?"):
        st.session_state.messages.append({"role": "user", "content": prompt})
        with st.chat_message("user"):
            st.markdown(prompt)

        # The chain expects chat_history as (human, ai) pairs rather than
        # (role, content) tuples, so pair up consecutive user/assistant turns.
        msgs = st.session_state.messages
        chat_history = [
            (msgs[i]["content"], msgs[i + 1]["content"])
            for i in range(len(msgs) - 1)
            if msgs[i]["role"] == "user" and msgs[i + 1]["role"] == "assistant"
        ]
        result = qa({"question": prompt, "chat_history": chat_history})

        with st.chat_message("assistant"):
            full_response = result["answer"]
            st.markdown(full_response)
        st.session_state.messages.append({"role": "assistant", "content": full_response})


if __name__ == "__main__":
    main()
```

Hi! Welcome to the forums!

:thinking:

Do you have any context instructions? You can easily tell the model how it should handle user queries with a system message or an injected user message. You're familiar with the playground, I imagine.

I'm not sure how well bifurcations or escalations work in langchain, but it would be fairly straightforward to catch a negative response (just ask for your output to be structured) and then decide what to do afterwards.
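Something like this, as a minimal sketch: it assumes the retrieval chain's prompt has been told to reply with a literal NO_ANSWER token when the context has nothing useful (the sentinel and the helper name are made up for illustration, and it uses the classic langchain API from your snippet):

```python
from langchain.schema import HumanMessage, SystemMessage


def answer_with_fallback(qa_chain, plain_llm, question, chat_history):
    # Ask the retrieval chain first. Its prompt should instruct the
    # model to emit NO_ANSWER when the context has no answer.
    result = qa_chain({"question": question, "chat_history": chat_history})
    answer = result["answer"]

    if "NO_ANSWER" in answer:
        # Escalate: re-ask a bare chat model with no retrieved context.
        fallback = plain_llm([
            SystemMessage(content="Answer from your general knowledge."),
            HumanMessage(content=question),
        ])
        return fallback.content

    return answer
```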

I'm not a great fan of langchain because it feels clunky and poorly documented, but if you find it useful, don't feel discouraged!

I am new at this, so this code is all I have, and I am not familiar with the playground. If you have any code-wise suggestions, please let me know. Thanks in advance.

Hmm, you picked a really tough first-time project.

Can you try to find out how to inject additional messages into your prompt, and then experiment with prompts on the playground to get the results you expect?

You can play with language such as “if there is no relevant information attached in the context, feel free to formulate a response using your gut feeling”.
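With ConversationalRetrievalChain, the place to inject that language should be the combine_docs_chain_kwargs argument, which swaps out the default question-answering prompt. A sketch, reusing the llm and vector_store names from your code:

```python
from langchain.prompts import PromptTemplate

qa_prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "Use the following context to answer the question.\n"
        "If there is no relevant information attached in the context, "
        "feel free to formulate a response from your own general knowledge.\n\n"
        "Context:\n{context}\n\n"
        "Question: {question}\n"
        "Answer:"
    ),
)

qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vector_store.as_retriever(),
    combine_docs_chain_kwargs={"prompt": qa_prompt},
)
```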

That said, it's also quite possible that langchain's ConversationalRetrievalChain does some prompt transformation or rephrasing of its own :thinking:

It all seems very abstract and obfuscated. It's easy to get started and show something to your boss, but beyond that… :frowning:

If you want to stick with langchain, you might be able to get more help on the langchain subreddit (maybe they have a Discord), but most of the issues you're facing right now aren't related to gpt-4 :confused:

The usual case discussed here is that developers don't want the model to answer if no knowledge is retrieved. This is because the probability of a hallucination goes up quite a bit in these cases, and you definitely don't want something that merely sounds reasonable to be mixed in with real answers.

In these cases the prompt is something like “If no applicable knowledge is retrieved, then reply with ‘I don't know’”. In your case you can adapt this prompt accordingly.
Another way of doing it would be to set a threshold on semantic similarity: if all retrieved results fall below the threshold, you go back to the chat bot with the user question, but with another system prompt that simply excludes the external knowledge.
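As a sketch, reusing the names from the posted code (FAISS's similarity_search_with_score returns L2 distances, so lower means more similar; the threshold value here is an arbitrary placeholder to tune on your own data):

```python
from langchain.schema import HumanMessage

DISTANCE_THRESHOLD = 0.5  # placeholder; tune empirically on your data

# FAISS returns (document, L2 distance) pairs; lower distance = more similar.
docs_and_scores = vector_store.similarity_search_with_score(prompt, k=4)
relevant = [doc for doc, score in docs_and_scores if score < DISTANCE_THRESHOLD]

if relevant:
    # Good retrieval: answer through the retrieval chain as usual.
    result = qa({"question": prompt, "chat_history": chat_history})
    answer = result["answer"]
else:
    # Nothing relevant retrieved: skip the external knowledge entirely
    # and ask the bare chat model instead.
    answer = llm([HumanMessage(content=prompt)]).content
```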


The best approach may depend on the proportion of questions you expect to be answered from the knowledge base versus the proportion of questions requiring general knowledge.

  1. If using general knowledge will be relatively rare, you could try adding instructions like “Consult the provided documents for your answer. If they are not helpful, you may base your answer on your own research.”

  2. If there will be a fairly even mix between the two kinds of answers, you could try adding instructions like “The provided documents may be helpful in writing your answer.” (Either wording can be slotted into the chain's QA prompt, as sketched below.)
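For example, dropping either instruction into the custom prompt discussed earlier in the thread (just a sketch; the constant names are illustrative):

```python
from langchain.prompts import PromptTemplate

CONSULT_FIRST = (
    "Consult the provided documents for your answer. If they are not "
    "helpful, you may base your answer on your own research."
)
MAY_BE_HELPFUL = "The provided documents may be helpful in writing your answer."

# Pick whichever instruction matches your expected mix of questions.
qa_prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=CONSULT_FIRST + "\n\nDocuments:\n{context}\n\nQuestion: {question}",
)
```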

Two additional thoughts:

  1. I would caution against using instructions like “act like a basic chat model.” Instead, pretend you are instructing students, i.e., “for your answer, you may use the textbook I've provided or you may conduct your own research.”

  2. I am curious about your use case, where a single application is built to answer questions both from a knowledge base and from outside it. Might your users be confused by this and wonder whether some answers are more reliable than others? Without knowing your use case, my gut reaction is that you may need two applications, not one.