The new `file_search` tool text injected under `# Tools` costs 555 tokens; the v1 retrieval tool text before it was 387 tokens.
Neither figure compares to the amount of document text v1 injects automatically before it even searches or browses, or to the volume of no-threshold results you get back from a v2 search.
v2, minimum bot, minimum input: 640 tokens placed. The 1,500 tokens of bike text were neither searched upon nor automatically injected (with 25k+ tokens sitting in vector storage, that could turn this into a 25k-token input question).
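The input-cost arithmetic above can be sketched as a back-of-the-envelope accounting function. The 555/387/640 figures are the measurements reported here; the 85-token base-prompt figure is a hypothetical remainder (640 minus the 555-token tool text), not something the API documents.

```python
# Rough billable-input accounting for an Assistants-style request, a sketch
# built from the measurements above. Assumption: total input = your own
# prompt + the tool text the API prepends + whatever document text the
# API injects on its own.

TOOL_TEXT_V2 = 555   # measured: "# Tools" text for the new file_search
TOOL_TEXT_V1 = 387   # measured: v1 retrieval tool text

def input_tokens(base_prompt: int, tool_text: int, injected: int) -> int:
    """Total input tokens you pay for on one request."""
    return base_prompt + tool_text + injected

# Minimum bot, minimum input: 640 tokens placed before any document text
# is injected (85 is the hypothetical base-prompt remainder).
baseline = input_tokens(85, TOOL_TEXT_V2, 0)        # 640

# Worst case if the search decides to inject the whole 25k vector store:
worst_case = input_tokens(85, TOOL_TEXT_V2, 25_000)  # ~25k-token input
```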
I might add that vector search can be problematic for the application shown. The tool text includes: “Tool for browsing the files uploaded by the user.” and “Parts of the documents uploaded by users will be automatically included in the conversation.”
You don’t have the ability to change that tool description to something like “company information placed by the AI developers to help you perform your task”.
Also, note the massive amplification of token usage on just turn 2, where I could have placed all the off-topic text for about 1,600 tokens by doing the injection RAG myself.
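What I mean by doing the injection yourself is sketched below: you rank and chunk the documents however you like, then place them into your own system message under a token budget, so the injected cost is bounded and known instead of whatever the tool decides. The 4-characters-per-token estimate and the header wording are assumptions for illustration; in practice you would count tokens with a real tokenizer.

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English prose.
    # Swap in a real tokenizer (e.g. tiktoken) for billing-accurate counts.
    return max(1, len(text) // 4)

def build_system_prompt(chunks: list[str], budget_tokens: int) -> tuple[str, int]:
    """Greedily place pre-ranked chunks into a system prompt until the
    token budget is spent. You control the framing text, unlike the
    fixed file_search tool description."""
    header = ("Company information placed by the AI developers "
              "to help you perform your task:\n")
    used = estimate_tokens(header)
    parts = [header]
    for chunk in chunks:  # chunks assumed already ranked by relevance
        cost = estimate_tokens(chunk)
        if used + cost > budget_tokens:
            break  # stop before blowing the budget
        parts.append(chunk)
        used += cost
    return "\n".join(parts), used
```

The point of the budget parameter is exactly the amplification complaint above: with your own injection, an off-topic turn costs the budget you chose, not 25k tokens of vector-store spillover.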

