I’m suddenly having issues with my gpt-4o assistant fetching data from the vector store I mention in my prompt. The LLM responds that it is facing technical difficulties fetching documents from the vector store. The API flow has not changed. When I switch the assistant’s model to gpt-4o-mini, the LLM no longer reports technical difficulties.
I started over with a clean assistant and vector store: still the same issue. I also started over with a clean assistant and attached the documents directly: still the same issue.
I’d really like to know whether I’m the only one facing this. The API status page doesn’t show any current incidents.
(initially thought I located the issue, but this is not the case)
Looking deeper into this issue. Can anyone verify: in the thread logs, for a thread where a vector store is supposed to be used, does it also say ‘None’ next to “File search stores” for you?
In some threads, the assistant actively refers to document file names (Source: xxx) in its response, which leads me to believe it does fetch and use data from the vector store. Still, “File search stores” in the GUI says None in those cases.
I have the same issue with API requests to my assistant using the gpt-4o model. When I use the gpt-4o-mini model, all files I loaded into the vector store attached to the thread via tool_resources are analyzed correctly. When I use the gpt-4o model, every request fails with a plaintext message saying the assistant is unable to access the file.
Last week I was able to perform file searches in the attached vector stores by using the gpt-4o model without any issues.
That’s what I do as well. Most of the time I re-use an existing assistant (which has the file_search tool enabled). The assistant itself has no vector store attached. When I then create a new thread, I attach the vector store with the corresponding file upload to it.
After this, I start a new run with the file_search tool selected.
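For reference, here is a minimal sketch of that flow. The IDs are placeholders, and this assumes the OpenAI Python SDK v1 Assistants (v2 beta) endpoints; the helpers below just build the request payloads, so the shape of `tool_resources` is easy to inspect without a live client.

```python
def build_thread_params(vector_store_id: str) -> dict:
    """Thread-creation payload attaching one vector store for file_search.
    Note the store is attached to the *thread*, not the assistant."""
    return {
        "tool_resources": {
            "file_search": {"vector_store_ids": [vector_store_id]}
        }
    }


def build_run_params(assistant_id: str) -> dict:
    """Run-creation payload that explicitly selects the file_search tool."""
    return {
        "assistant_id": assistant_id,
        "tools": [{"type": "file_search"}],
    }


# With a live client this would be used roughly as:
#   client = openai.OpenAI()
#   thread = client.beta.threads.create(**build_thread_params("vs_..."))
#   run = client.beta.threads.runs.create(thread_id=thread.id,
#                                         **build_run_params("asst_..."))
```

If this matches what others are sending, the gpt-4o vs gpt-4o-mini difference would seem to be on the model side rather than in the request shape.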
My file upload is done very simply, using the chunked file upload function. After the file is created, I just put it into a new vector store.
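A sketch of that upload path, under the same assumptions as above (OpenAI Python SDK v1; file name and store name are hypothetical). The helper only builds the vector-store payload; the commented lines show where the live calls would go.

```python
def build_vector_store_params(file_ids: list, name: str = "thread-store") -> dict:
    """Vector-store creation payload referencing already-uploaded file IDs."""
    return {"name": name, "file_ids": list(file_ids)}


# With a live client, roughly:
#   client = openai.OpenAI()
#   f = client.files.create(file=open("doc.pdf", "rb"), purpose="assistants")
#   store = client.beta.vector_stores.create(
#       **build_vector_store_params([f.id])
#   )
# The resulting store.id is then what goes into the thread's tool_resources.
```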
Thanks for this thread, it’s very helpful. We are experiencing a similar issue with gpt-4o; however, for us it doesn’t work with gpt-4o-mini either. We definitely do not send vector_store_ids in the API request; maybe that’s a lead toward a solution.
Just FYI, with gpt-3.5-turbo it’s working perfectly fine.