LangChain RAG with conversation memory

Hi, I want to use a Pinecone database to restrict GPT-3.5's answers to my own data. I found a good example here, which I want to use:

(https://github.com/pinecone-io/examples/blob/master/learn/generation/langchain/handbook/05-langchain-retrieval-augmentation.ipynb)

Can you help me finish the code from the example with RetrievalQAWithSourcesChain, so that the chatbot can remember the conversation?

Welcome!

Are you trying to find ChatGPT?

chat.openai.com

GPT-4 does not provide a working solution.

I found a solution to my problem:

from langchain.chains import RetrievalQAWithSourcesChain
from langchain.memory import ConversationBufferMemory

# Store the running conversation under "chat_history". The chain returns
# both "answer" and "sources", so output_key tells the memory which of the
# two to save; without it the memory raises an error about multiple keys.
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
    input_key="question",
    output_key="answer",
)

# llm and vectorstore are built in the earlier steps of the notebook.
qa_with_sources = RetrievalQAWithSourcesChain.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
    memory=memory,
)

query = "Who was Benito Mussolini?"
res = qa_with_sources.invoke({"question": query})
print(res)  # dict with "answer" and "sources"

# To continue the conversation, reuse the same qa_with_sources instance
query2 = "At what time was he in power?"
res2 = qa_with_sources.invoke({"question": query2})
print(res2)
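
For completeness, the snippet assumes llm and vectorstore already exist from the earlier steps of the linked notebook. Here is a minimal sketch of that setup, assuming the older langchain / pinecone-client APIs the notebook uses; the API key placeholders and the index name are assumptions you should replace with your own:

import pinecone
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

# Assumed setup, roughly following the notebook; keys and region are placeholders.
pinecone.init(api_key="YOUR_PINECONE_API_KEY", environment="YOUR_PINECONE_ENV")

# Embeddings must match the model used when the index was populated.
embeddings = OpenAIEmbeddings(openai_api_key="YOUR_OPENAI_API_KEY")

# Connect to the index created earlier in the notebook
# ("langchain-retrieval-augmentation" is the name the notebook uses).
vectorstore = Pinecone.from_existing_index(
    index_name="langchain-retrieval-augmentation",
    embedding=embeddings,
    text_key="text",
)

# Deterministic chat model so answers stay grounded in the retrieved context.
llm = ChatOpenAI(
    openai_api_key="YOUR_OPENAI_API_KEY",
    model_name="gpt-3.5-turbo",
    temperature=0.0,
)

With this in place, the memory carries the chat history across calls, which is why the follow-up question ("At what time was he in power?") can resolve "he" to Mussolini from the first turn.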
