I’ve developed a new assistant using document vectors from PgVectoRS via Llama Index. When I initially tested the assistant in a Jupyter notebook, it performed well, accurately answering questions and providing sources. However, when I integrated it into a Telegram bot using its assistant ID, its performance declined: instead of providing accurate answers with sources, it started generating nonsensical responses without any references.
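For context, this is roughly how the bot calls the assistant by its ID for each incoming Telegram message (simplified; ASSISTANT_ID and ask_assistant are placeholders for my actual values and handler wiring):

```python
from openai import OpenAI

client = OpenAI()
ASSISTANT_ID = "asst_..."  # placeholder for my assistant's ID

def ask_assistant(question: str) -> str:
    # Create a fresh thread, add the user's message, and start a run
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=question
    )
    run = client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=ASSISTANT_ID
    )
    # ... polling loop as shown further down ...
    # Once the run completes, return the latest assistant message
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    return messages.data[0].content[0].text.value
```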
I suspect the issue might stem from how I connected to the assistant’s API, particularly from how I handle the requires_action status to retrieve the tool_outputs. In the snippet below, I append each function call’s arguments to the output key, but I’m not sure this is the correct approach; it seems I may need to do something different to properly retrieve the tool outputs and pass them back to the assistant. Here’s the relevant code for reference:
```python
tool_outputs = []
while run.status != "completed":
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
    if run.status == "requires_action":
        for tool in run.required_action.submit_tool_outputs.tool_calls:
            # Here I append the raw function arguments as the "output"
            tool_outputs.append({"tool_call_id": tool.id, "output": tool.function.arguments})
```
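Based on my reading of the docs, I suspect I’m supposed to actually execute each requested function and submit its return value via submit_tool_outputs, rather than echoing back the arguments, roughly like the sketch below (execute_tool is just a placeholder for my own dispatch logic), but I’d like to confirm that this is the right pattern:

```python
import json
import time

while run.status not in ("completed", "failed", "cancelled", "expired"):
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
    if run.status == "requires_action":
        tool_outputs = []
        for tool in run.required_action.submit_tool_outputs.tool_calls:
            args = json.loads(tool.function.arguments)
            # execute_tool is a placeholder for my own function dispatch
            result = execute_tool(tool.function.name, args)
            tool_outputs.append({"tool_call_id": tool.id, "output": json.dumps(result)})
        # Hand the results back so the run can continue
        run = client.beta.threads.runs.submit_tool_outputs(
            thread_id=thread.id, run_id=run.id, tool_outputs=tool_outputs
        )
    time.sleep(1)  # avoid hammering the API while polling
```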
Could you please advise on how to properly handle the requires_action status and retrieve the tool_outputs so that the assistant works correctly inside the Telegram bot? If you need any additional information, please let me know. Thank you!