Is this the right use case for the Assistants API?

I want to build an agent that can take in and process a set of documents (mostly .pdf and .docx) from the user and answer a single question based on those documents. I also want the agent to do function calling based on its answers. For injecting documents (aside from RAG and third-party vector stores) I came across Assistants. It seems to serve the purpose well, but I’m a little confused about the concept of managed vector stores. If I use the file_search tool with an assistant, does that mean that in my use case every single user session will end up with its own vector store? If so, does this sound efficient? Is there a more efficient way to do this?

We are doing something similar to what you describe, with code written to the v1 API (no vector stores). We attach our “reference files” to the assistant, and in our .threads.create() call the file we want to summarize is passed as an attachment. OpenAI automatically creates a vector store around the file, and we delete it all, including the thread, at the end of our user’s session. The call is shown below.

thread = client.beta.threads.create(
    messages=[
        {
            "role": "user",
            "content": ThreadMess,
            # Attach the file to the message; file_search builds a vector store around it.
            "attachments": [
                {"file_id": oAI_file.id, "tools": [{"type": "file_search"}]}
            ],
        }
    ]
)
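And a minimal sketch of the end-of-session cleanup, assuming the OpenAI Python SDK v1.x. The `cleanup_session` helper is ours, not part of the SDK, and the `tool_resources` lookup is our assumption about where the auto-created vector store ID is referenced; adjust to whatever IDs you track in your app.

from openai import OpenAI

client = OpenAI()

def cleanup_session(thread_id: str, file_id: str) -> None:
    # The vector store OpenAI created for the attachment is referenced
    # on the thread's tool_resources (assumption about the response shape).
    thread = client.beta.threads.retrieve(thread_id)
    vs_ids = []
    if thread.tool_resources and thread.tool_resources.file_search:
        vs_ids = thread.tool_resources.file_search.vector_store_ids or []

    for vs_id in vs_ids:
        client.beta.vector_stores.delete(vs_id)   # drop the auto-created store
    client.files.delete(file_id)                  # drop the uploaded file
    client.beta.threads.delete(thread_id)         # drop the thread itself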

Yes and no. You can pass your vector store IDs either once on the Assistant or when creating each Thread. If the same files are used for every session, attach them via the Assistant; if the files change per session, attach them via the Thread. A sketch of both options is below.
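A minimal sketch of the two options (Assistants v2-style API; the model name and the vector store IDs are placeholders):

# Option A: same files every session -> attach the store once on the Assistant.
assistant = client.beta.assistants.create(
    model="gpt-4o",
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": ["vs_reference_docs"]}},
)

# Option B: files change per session -> attach a per-session store on the Thread.
thread = client.beta.threads.create(
    tool_resources={"file_search": {"vector_store_ids": ["vs_session_docs"]}},
)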
