I’m currently working with two LangChain pandas agents to retrieve information from large tabular datasets. I originally had both datasets (Iris and Titanic) handled by a single agent, but separating them into two agents has improved inference accuracy.
Currently, these agents lack memory, and the latest version of LangChain no longer supports passing memory to create_pandas_dataframe_agent through kwargs. I’m curious whether it’s possible to build a conversational chatbot with OpenAI’s chat completions API while integrating these two LangChain agents as tools.
My goal is to give the chatbot memory, so that it remembers a limited number of interactions (e.g., the last 5) and switches between the Titanic and Iris agents as needed to answer accurately based on previous questions and context.
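For the memory cap, what I have in mind is a simple sliding window over the message list, roughly like the sketch below (trim_history is just a helper name I made up, and it assumes every history entry is a plain dict):

# Minimal sketch of the sliding-window memory I'm after: keep the system
# prompt plus only the last `max_turns` user/assistant exchanges.
def trim_history(messages, max_turns=5):
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    # one turn is roughly one user message plus one assistant message
    return system + rest[-max_turns * 2:]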
I’ve attempted to follow various guides but haven’t been successful. If anyone could provide insights or direct me to resources that could help achieve this integration, I would greatly appreciate it!
Is this possible? (I’m getting an error right now, but I’d be happy if you could help or point me in the right direction.) Here’s my current code:
# Required imports
import json  # needed to parse tool-call arguments

import pandas as pd
from langchain_experimental.agents import create_pandas_dataframe_agent  # moved out of langchain.agents
from langchain_openai import ChatOpenAI  # the langchain.chat_models import is deprecated
from openai import OpenAI
from tenacity import retry, wait_random_exponential, stop_after_attempt
# Load datasets
df_titanic = pd.read_csv("https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv")
df_iris = pd.read_csv("https://gist.githubusercontent.com/curran/a08a1080b88344b0c8a7/raw/0e7a9b0a5d22642a06d3d5b9bcbad9890c8ee534/iris.csv")
# Initialize the OpenAI LLM
llm = ChatOpenAI(model="gpt-4")
# Create LangChain agents for both datasets
# (recent langchain-experimental versions require allow_dangerous_code=True,
# since the agent executes generated Python against the dataframe)
titanic_agent = create_pandas_dataframe_agent(llm, df_titanic, verbose=True, allow_dangerous_code=True)
iris_agent = create_pandas_dataframe_agent(llm, df_iris, verbose=True, allow_dangerous_code=True)
# Define the OpenAI client
client = OpenAI()
# Define tools for OpenAI
tools = [
    {
        "type": "function",
        "function": {
            "name": "titanic_agent",
            "description": "Interacts with the Titanic dataset.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The question or query related to the Titanic dataset."
                    }
                },
                "required": ["query"]
            },
        }
    },
    {
        "type": "function",
        "function": {
            "name": "iris_agent",
            "description": "Interacts with the Iris dataset.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The question or query related to the Iris dataset."
                    }
                },
                "required": ["query"]
            },
        }
    },
]
# Utility function for making requests to the Chat Completions API
@retry(wait=wait_random_exponential(multiplier=1, max=40), stop=stop_after_attempt(3))
def chat_completion_request(messages, tools=None):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=messages,
        tools=tools,
    )
    return response
# Main conversation loop
messages = []
messages.append({"role": "system", "content": "You are a chatbot that can answer questions using the Titanic and Iris datasets."})

while True:
    user_input = input("User: ")
    messages.append({"role": "user", "content": user_input})

    # Get the response from OpenAI
    chat_response = chat_completion_request(messages, tools=tools)
    assistant_message = chat_response.choices[0].message
    messages.append(assistant_message)

    # Handle tool calls: with the `tools` parameter, the v1 SDK returns
    # `tool_calls` (not `function_call`), and the message is an object,
    # so attribute access replaces .get() and [] indexing
    if assistant_message.tool_calls:
        for tool_call in assistant_message.tool_calls:
            function_name = tool_call.function.name
            function_args = json.loads(tool_call.function.arguments)  # arguments arrive as a JSON string
            if function_name == "titanic_agent":
                result = titanic_agent.run(function_args["query"])
            elif function_name == "iris_agent":
                result = iris_agent.run(function_args["query"])
            # Tool results use role "tool" and must echo the tool_call_id
            messages.append({"role": "tool", "tool_call_id": tool_call.id, "content": result})
        # Let the model turn the tool output into a final answer
        final_message = chat_completion_request(messages, tools=tools).choices[0].message
        messages.append(final_message)
        print(f"Assistant: {final_message.content}")
    else:
        print(f"Assistant: {assistant_message.content}")