Today, you can attach at most one vector store to an assistant and at most one vector store to a thread.
Does this mean that the assistant can use two vector databases simultaneously, one for a thread and one for itself?
It sounds like it.
Edit: Yes. I just tested it.
I created one vector store containing the Llama 3 model card and another containing the Mixtral model card, attached one to an assistant and the other to a thread, then asked the assistant to compare the two models based on the information in their respective model cards.
It appears to have worked as intended.
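For reference, a minimal sketch of that setup with the Python SDK (Assistants v2; the `beta.*` namespaces may differ in newer SDK releases, and the file names and store names below are placeholders):

```python
from openai import OpenAI

client = OpenAI()

# Two separate vector stores, one per model card (placeholder names/files)
llama_store = client.beta.vector_stores.create(name="llama-3-model-card")
mixtral_store = client.beta.vector_stores.create(name="mixtral-model-card")

client.beta.vector_stores.files.upload_and_poll(
    vector_store_id=llama_store.id, file=open("llama3_model_card.pdf", "rb")
)
client.beta.vector_stores.files.upload_and_poll(
    vector_store_id=mixtral_store.id, file=open("mixtral_model_card.pdf", "rb")
)

# One vector store attached to the assistant...
assistant = client.beta.assistants.create(
    model="gpt-4o",
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [llama_store.id]}},
)

# ...and the other attached to the thread
thread = client.beta.threads.create(
    messages=[{"role": "user",
               "content": "Compare Llama 3 and Mixtral using their model cards."}],
    tool_resources={"file_search": {"vector_store_ids": [mixtral_store.id]}},
)

run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id, assistant_id=assistant.id
)
```

With both stores attached this way, file_search can draw on the two model cards in a single run.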
@anon22939549 Can I create my own vector store and run an assistant on top of it?
Due to policy restrictions, I can't share my files with OpenAI. Is there any way around this? Can I create my own database and use an assistant on top of it?
You can create a local semantic vector database and have the AI query it via tools (function calling), or you can run the search yourself and automatically inject the relevant documentation alongside the latest user question.
For the AI to answer about your files, some content of your files must be sent to OpenAI.
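A rough sketch of the second option (retrieve locally, then send only the snippets you choose to include); `search_local_db` is a placeholder for whatever local vector index you run:

```python
from openai import OpenAI

client = OpenAI()

def search_local_db(query: str, k: int = 3) -> list[str]:
    # Placeholder: replace with a query against your own local vector index
    # (self-hosted embeddings + similarity search). Nothing in this step
    # leaves your machine.
    return ["<top matching snippet 1>", "<top matching snippet 2>"][:k]

def answer(question: str) -> str:
    # Only the retrieved snippets plus the question are sent to OpenAI.
    context = "\n\n".join(search_local_db(question))
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided documentation excerpts."},
            {"role": "user",
             "content": f"Documentation:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```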
I see that this post is from back in April, so I was wondering whether the information provided is still accurate today. If so, that's disappointing. More importantly, I seem to have accomplished having multiple assistants tied to one vector store, but now I can't remember how it was done so that I can replicate it. Any feedback?