Trouble combining the Retrieval + Function Calling tools via the Assistants API

When I try this pipeline, the function call at the bottom has a randomly generated story as its argument. I expect the argument to be the text of the ch1.txt file, which I upload and attach to the assistant when I create it.

I have included 'retrieval' and my function definitions in the tools list passed to assistants.create.

It clearly finds at least one of the functions and uses it through the assistant; it's just not hooking up with my uploaded file. `list(client.files.list())[0].id` is definitely my ch1.txt file, and it uploaded correctly.

```python
import time
from openai import OpenAI

client = OpenAI()
assistant = None  # set to an existing assistant object to skip creation below

#step 1 upload our chapter txt
upload_file = 'ch1.txt'
if upload_file not in [i.filename for i in client.files.list()]:
  chapter_file = client.files.create(
    file=open(upload_file, "rb"),
    purpose='assistants'
  )
  print('uploaded')
else:
  print('already uploaded')

#step 2 set up tools: retrieval + function calls
tools = [{'type': 'retrieval'}]
generate_chapter_summary = {
    "name": "generate_chapter_summary",
    "description": "A function that takes in a chapter text and generates a summary.",
    "parameters": {
        "type": "object",
        "properties": {
            "chapter_text" : {
                "type": "string",
                "description" : "The text of the chapter to summarize"
            }
        },
        "required": ["chapter_text"]
    }
}

list_character_facts = {
    "name": "list_character_facts",
    "description": "A function that takes in a chapter text and lists character facts in a JSON format.",
    "parameters": {
        "type": "object",
        "properties": {
            "chapter_text" : {
                "type": "string",
                "description" : "The text of the chapter to analyze for character facts"
            }
        },
        "required": ["chapter_text"]
    }
}
tools.append({'type': 'function', 'function': generate_chapter_summary})
tools.append({'type': 'function', 'function': list_character_facts})
print(tools)

chapter_file = list(client.files.list())[0]
chapter_file_id = chapter_file.id

#step 3 create assistant (only if it doesn't already exist)
if assistant is None:
  assistant = client.beta.assistants.create(
    name="colab Literature content writer",
    instructions=f"Retrieve the file {chapter_file_id}. And answer questions about the contents of the story chapter.",
    model = "gpt-3.5-turbo-1106",
    tools = tools,
    file_ids=[chapter_file_id]
    )
else:
  pass

#step 4 create new thread
thread = client.beta.threads.create()
#step 5 add user message to thread
message = client.beta.threads.messages.create(
    thread_id = thread.id,
    role = 'user',
    content=f'Generate Chapter summary Who is present in this chapter from the file {chapter_file_id}?',
)
#step 6 run assistant to get back response
run = client.beta.threads.runs.create(
    thread_id = thread.id,
    assistant_id = assistant.id,
)
#step 7 get back a response
run = client.beta.threads.runs.retrieve(
    thread_id = thread.id,
    run_id = run.id
)

while run.status not in ["completed", "failed", "requires_action"]:
  run = client.beta.threads.runs.retrieve(
    thread_id = thread.id,
    run_id = run.id
  )
  print(run.status)
  time.sleep(10)

## Step 8 - IF "requires_action" then figure out and call the correct function logic
tools_to_call = run.required_action.submit_tool_outputs.tool_calls
print(len(tools_to_call))
print(tools_to_call)

for i in tools_to_call:
  print(f'{i.function.name}')
  print(f'{i.function.arguments}')
  print()
```

This outputs:

```text
[RequiredActionFunctionToolCall(id='call_so9L7nnnd68X6HPvMOJTGNIl', function=Function(arguments='{"chapter_text":"Amidst the chaos and destruction, Captain Maria rallied her team to secure the perimeter. Private Rodriguez, Corporal Johnson, and Sergeant Ramirez moved swiftly, taking cover behind the debris. The enemy forces were relentless, but Lieutenant Chen and Major Patel provided air support from the Apache helicopters. As the battle raged on, Doctor Nguyen tended to the wounded in the makeshift field hospital, while Captain Lee coordinated communication between the ground units and headquarters."}', name='generate_chapter_summary'), type='function')]
```



This is a completely unrelated story that the model generates as the argument for the function call, rather than the text from ch1.txt.

I explicitly put the filename (on openai server) in the prompt/instructions.
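For completeness, the next step after the printout above would be to execute each requested function locally and hand the results back with submit_tool_outputs so the run can finish. A rough sketch of that dispatch, where summarize_chapter and extract_character_facts are hypothetical local implementations:

```python
import json

# Sketch only: run each requested tool call against a local implementation
# and return the outputs so the run can continue.
tool_outputs = []
for call in tools_to_call:
    args = json.loads(call.function.arguments)
    if call.function.name == "generate_chapter_summary":
        result = summarize_chapter(args["chapter_text"])        # hypothetical helper
    elif call.function.name == "list_character_facts":
        result = extract_character_facts(args["chapter_text"])  # hypothetical helper
    else:
        result = f"Unknown function: {call.function.name}"
    tool_outputs.append({"tool_call_id": call.id, "output": str(result)})

run = client.beta.threads.runs.submit_tool_outputs(
    thread_id=thread.id,
    run_id=run.id,
    tool_outputs=tool_outputs,
)
```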

When you open the Assistants screen in the backend, do you see the specific file as well? (This is tricky because at the moment it doesn't show the file name you used.)


As a simple solution, I've mapped the names of the files in the backend to their original names.
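A minimal sketch of one way to build that mapping, assuming the same `client` object as in the original post:

```python
# Look up OpenAI file ids by the original filename they were uploaded with.
file_id_by_name = {f.filename: f.id for f in client.files.list()}
chapter_file_id = file_id_by_name["ch1.txt"]
print(chapter_file_id)
```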

Indeed, both the file I mean to use and the function calls appear in the Assistants UI in the backend.

I'm not sure I understand. You map the OpenAI file id to the original filename? And then in the message thread you include the name or file id in the prompt?

I am not sure about the contents of your file, but you might want to tune your prompt in the Assistant (not in the user message) a bit more to specifically refer to what 'chapter' means and where to find it. But you might already have that. We don't know at the moment whether the file IS indeed stored on OpenAI's side or not.
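One way to settle that last point is to ask the API for the file directly; a quick sketch, assuming the chapter_file_id from the original post:

```python
# Confirm the file actually exists on OpenAI's side and was uploaded
# with the 'assistants' purpose.
info = client.files.retrieve(chapter_file_id)
print(info.filename, info.bytes, info.purpose)
```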

I edited my OP code; I use an f-string to include the OpenAI file id in both, but it still ignores it. It's just a small .txt file with the literal chapter text.

I did it over again and now it works, but only with gpt-4-1106-preview. I mention the filename and the OpenAI file_id in every place I can.
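Roughly the kind of change involved, as a sketch (the instructions wording here is illustrative, not the exact text I used):

```python
# Same setup as before, but on gpt-4-1106-preview and with both the original
# filename and the OpenAI file id spelled out in the instructions.
assistant = client.beta.assistants.create(
    name="colab Literature content writer",
    instructions=(
        f"The story chapter is in the attached file ch1.txt (file id {chapter_file_id}). "
        "Use retrieval to read it before calling any function."
    ),
    model="gpt-4-1106-preview",
    tools=tools,
    file_ids=[chapter_file_id],
)
```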


Assistants and the preview AI models are friends. They are the only ones that support multi-tool calls, and OpenAI is not forthcoming with the functions and tools used internally by assistant agents to know what is required, or what is actually happening.

(edit: more is found when you stumble across “endpoint compatibility” of all places - that retrieval requires use of -1106 models. Thus assistants must need some of the new training, likely the parallel tool call ability being used to explore documents specifically by function.)


I have been facing the same issue. It seems that currently there are no gpt-3.5 models supporting both of these tools at the same time. @glector did you manage to find a solution for this?

I have a similar problem with my Assistant, which consists of three tools: two functions and file search. After upgrading to the v2 Assistant, it only uses one of the functions and never the file search tool. In the previous version (v1), each query was handled effectively using one of the tools. Despite using the same instructions and tools, the Assistant now fails to select the appropriate tool for the query.
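For reference, in v2 the retrieval tool became file_search and files are attached through a vector store instead of file_ids. A minimal sketch of that wiring (the vector store name and instructions are illustrative, and the exact client methods may vary by SDK version):

```python
# v2 Assistants: attach the chapter file via a vector store and enable
# file_search alongside the two function tools.
vector_store = client.beta.vector_stores.create(name="chapter-files")
client.beta.vector_stores.files.create(
    vector_store_id=vector_store.id,
    file_id=chapter_file_id,
)

assistant = client.beta.assistants.create(
    name="Literature content writer (v2)",
    instructions="Answer questions about the attached story chapter.",
    model="gpt-4-turbo",
    tools=[
        {"type": "file_search"},
        {"type": "function", "function": generate_chapter_summary},
        {"type": "function", "function": list_character_facts},
    ],
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
)
```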

Yeah, I think you need to use gpt-4, but obviously at completely different pricing…

Why? @nikunj