Problem of Retrieving Information from File Search: Playground vs. API Usage

I have an assistant that uses “File Search.” In my file, there is a lot of information. Previously, this information could be easily retrieved by my assistant. Actually, it can still be easily retrieved when I use the playground. However, when I try to use the same assistant via the API, the responses change to “I don’t have any information about this” and “I can’t retrieve any information now.”

This doesn’t always happen; it’s only sometimes. But I can clearly see the difference between using the playground and the API. Does anyone have any ideas about this?


I had a similar issue, and the problem was that when I uploaded the file via an API call, the file was not being created properly. So, what is the format of your files? Are you able to retrieve information from the same file in the Playground? Are you using Assistants v1 or v2?

I did not upload via an API call; I uploaded directly on the website. The file is a .doc. But the important thing is that it works perfectly in the Playground. It was Assistants v1.

Try one thing: in the Playground, ask your assistant to give you the first line and the last line of your document. See if it returns the correct information.


I have the same problem. In the Playground it works perfectly and retrieves any information available in the file, but via the API it doesn’t even look at the file.

I have the same problem! But I think it could be related to the new version and the SDK; I’m trying different versions and looking into using the latest one.

OK, we were using the Python package openai==1.2.3, which calls the v1 Assistants API, and it recently started to fail on retrievals. I updated to the latest version, openai==1.23.2, to use the v2 Assistants API, and now it’s working. 🙂
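Since the fix above hinges on the installed SDK version, a small helper can check it numerically before assuming v2 behavior. The 1.23.2 cutoff is taken from this thread (the version reported to work), not from official documentation:

```python
def parse_version(v: str) -> tuple:
    # "1.23.2" -> (1, 23, 2); numeric comparison avoids the string-ordering
    # trap where "1.9.0" would sort after "1.10.0".
    return tuple(int(part) for part in v.split("."))

def supports_assistants_v2(installed: str, minimum: str = "1.23.2") -> bool:
    # Treats the version reported working in this thread as the minimum
    # (an assumption, not an official cutoff).
    return parse_version(installed) >= parse_version(minimum)
```

Usage would be something like `from importlib.metadata import version; supports_assistants_v2(version("openai"))`.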


I’ve upgraded to openai==1.23.2, but that was not enough. I also created a new assistant and uploaded my files again. It works better now, but I’m still having the same issue.

Although I have updated to openai 1.23.2, I am not getting any useful results for file search via the API v2 or the Playground. I have already created countless vector stores and assistants via API and GUI, but no information from the documents is used, even if “Running retrieval” is displayed in the Playground. After an explicit request “search for x in file y” the info is found. Even changing the temperature or top_p does not improve this.

I would like to use v2, but currently I only get usable results with v1.

It would be nice to know if more people are experiencing this or if you have had more positive experiences. If so, does it work via Playground and API v2? And which model do you use?

Upgrading the Node.js library to the latest version (“openai”: “^4.38.2”) fixed the issue for me.


In my case I only get good results when I enforce file_search in every run. My vector store is only 4 MB (txt files), but every run comes with approx. 16k context tokens, so a thread with 20 turns using gpt-4-turbo adds up to approx. $3.50. How is that supposed to work with huge vector stores? Enforcing file_search per run also means the assistant’s functions are never called.
I would be happy to learn from your experiences.
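For reference, "enforcing file_search in every run" can be sketched at the payload level like this (Assistants v2; the IDs are placeholders, and the real call is `client.beta.threads.runs.create(...)`). Pinning `tool_choice` to one tool is also why the assistant's function tools never get called on that run:

```python
def build_run_params(thread_id: str, assistant_id: str, force_file_search: bool = False) -> dict:
    params = {"thread_id": thread_id, "assistant_id": assistant_id}
    if force_file_search:
        # Restricts the model to file_search for this run, so it cannot
        # pick a "function" tool instead.
        params["tool_choice"] = {"type": "file_search"}
    return params
```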

@arakonaut You’re not alone; the pricing of Assistants doesn’t make sense to me, especially for file_search. Too costly.

I’m noticing another issue with Retrieval.
If you have a prompt that is supposed to trigger retrieval (e.g., “use our previous conversation to remind me of XYZ”), it does work, but it looks at older data I have put in there, so I wonder if there is a cache issue; it doesn’t automatically and correctly retrieve the information from “my last session” when asked to in my prompts.
However, when conversing as a user in the interaction, if I explicitly say it’s not correct and ask it to retrieve the information, Retrieval is activated and the right information is retrieved.
So it doesn’t work automatically through “instructions”, but it works if I prompt it during the interaction.
Did you observe similar things?


I have exactly the same problem.

Also, I have to explicitly tell it to search the attached document to see if I need to pull anything from it, and sometimes the assistant fails to call functions.


I’m working on a new interface using the APIs and coding abilities, so it’s good to see these issues; hopefully we can solve them with precise coding and knowledge of the issues.

I wonder why “prompts” don’t trigger retrieval the right way. They do seem to trigger retrieval (maybe not?), but when I look at the results, the data/information is older or not what I want. So maybe I need to check whether this is true retrieval from my previous/stored data, or just the LLM hallucinating, which I don’t think it is, because it returns information that I apparently created/stored in the past.

Solved by explicitly putting “retrieval” in the prompt instructions.

It is a bit unclear to me how file_search is adequately triggered (via the assistant, thread, message, or run?).

Let’s say I want to create an assistant with the file_search tool to search content in a file.

Step 1: create a vector store (alternatively, the vector store can be added later via the Modify Assistant call).
Step 2: create the assistant:

  {
    "name": "<name>",
    "instructions": "<instructions>",
    "tools": [
      {"type": "file_search"}
    ],
    "tool_resources": {"file_search": {"vector_store_ids": ["<VectorStoreID>"]}},
    "model": "<model>",
    "metadata": {},
    "top_p": 1.0,
    "temperature": 1.0,
    "response_format": "auto"
  }

Now the assistant has the vector store assigned, and file search enabled.

Step 3: upload the relevant file.
Step 4: attach the file to the relevant vector store.
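One timing pitfall with steps 3–4: retrieval can come back empty if a run starts before the uploaded file finishes processing in the vector store. A generic polling sketch, where `get_status` is an assumed stand-in for something like `lambda: client.beta.vector_stores.files.retrieve(...).status`:

```python
import time

def wait_until_terminal(get_status, timeout_s: float = 60.0, interval_s: float = 1.0) -> str:
    # Poll until the vector store file reaches a terminal state.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("completed", "failed", "cancelled"):
            return status  # terminal states for vector store files
        time.sleep(interval_s)
    raise TimeoutError("vector store file still processing after timeout")
```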

So now I want to ask a question about the file(s) in the assistant’s vector store, without adding any new context via the message or thread (no files to be added there, and no vector stores to be created there either).

It becomes a bit unclear what the plain-vanilla approach is from there.
When I create a simple thread, message, and run for the assistant, without pointing to the tools, the response message is that there is no file, or I get a message that the assistant is experiencing a technical issue.
(When I do all the checkups, the vector store status is completed, the file is in the vector store, and the vector store is connected to the assistant with file_search enabled.)

Now when I look at the Playground, for example, it becomes a bit fuzzy.

On the one hand, the GUI in the Playground (see image below) appears to imply that the assistant’s file_search tool needs to be activated for the Run or Create Message call.

[Screenshot of the Playground tool settings, 2024-04-26]

On the other hand, the Create Run documentation states that adding “tools” to a run overrides the assistant’s default settings:

see Tools in a run

So in a plain-vanilla situation (one file in an assistant’s vector store), how do you get the assistant to respond to a message, taking the information in its vector store into account?

If the assistant has file search enabled, then the tool specification for it to use is injected into the system message (aka instructions).

The AI does not know what it will find, and you cannot alter the specification like you can when you use your own function for a database. So to make the most of the file search, and to avoid paying for unnecessary calls, you would make a clear statement in the instructions about the contents. Example:

“You have a tool file_search, which allows you to search document files with xyz company’s product listings and troubleshooting knowledge base. You have not been pretrained to answer xyz company questions; you must use the search function to satisfy user questions about xyz company”

Problems mentioned earlier involved continuing conversations: the AI may see previous tool calls to myfiles_browser and their returns in its conversation history, and be confused about why that tool spec is no longer there or how to get the knowledge again. The system message can make the operations clearer.
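The advice above can be sketched at the payload level: describe the file contents in the assistant's instructions so the model knows when to use file_search. The wording, model name, and vector store ID here are illustrative only:

```python
INSTRUCTIONS = (
    "You have a tool file_search, which searches xyz company's product "
    "listings and troubleshooting knowledge base. You have not been "
    "pretrained on xyz company; use file_search to answer xyz questions."
)

def build_assistant_params(vector_store_id: str, model: str = "gpt-4-turbo") -> dict:
    # Payload for client.beta.assistants.create(...): enable the tool and
    # attach the vector store, with content-describing instructions.
    return {
        "model": model,
        "instructions": INSTRUCTIONS,
        "tools": [{"type": "file_search"}],
        "tool_resources": {"file_search": {"vector_store_ids": [vector_store_id]}},
    }
```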


I’m using Playground v2, and I want to ensure that RAG is activated systematically from my prompt instructions, because that is not the case at all. Even systematically using “retrieve… in your knowledge file” results in RAG being activated only randomly.

Any experience or suggestion?

I am getting something similar, where it works in the Playground but via the API it does not. However, I am seeing this come back as a message.

Has file_search been removed?

When using this:

    tools: [
        {"type": "file_search"}
    ]

I get a response of:

    error: {
        message: "Invalid value: 'file_search'. Supported values are: 'code_interpreter', 'function', and 'retrieval'."
    }

Has the API changed?
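That error message is what the v1 Assistants API returns: in v1 the tool was named "retrieval", and "file_search" only exists under v2 (selected via the `OpenAI-Beta: assistants=v2` header, which recent SDK versions send by default). A small guard to catch the mismatch before sending a request, with the tool names taken from the error above and the v2 docs:

```python
V1_TOOLS = {"code_interpreter", "function", "retrieval"}
V2_TOOLS = {"code_interpreter", "function", "file_search"}

def invalid_tools(tools: list, api_version: str) -> list:
    # Return the tool types that the given Assistants API version rejects.
    allowed = V2_TOOLS if api_version == "v2" else V1_TOOLS
    return [t["type"] for t in tools if t["type"] not in allowed]
```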