Whenever I send a query, I ask the model to do both a file search and a web search and to return citations from each. The model almost always defaults to only doing web search or only doing file search. It doesn't do both.
Is this a prompting issue on my end, or is it that the OpenAI API endpoint can only run one of those searches per request? By file search, I mean the vector search that was originally introduced in the Assistants API and is now available in the Responses API, which is what I'm using.
I have been using the Responses API, but each response makes only one tool call: either file search over my vector stores or web search, never both. Can someone please advise or help?
Welcome to the community, @Sanjay_Prasad, and thank you for sharing your question.
I am assuming that you are using the tool_choice parameter in "auto" mode, either by setting it explicitly or by leaving it undefined, in which case it defaults to "auto" implicitly. In that mode, you're letting the LLM decide whether it should call a tool at all and, if so, which one.
To override this behavior, you can set tool_choice to required, as outlined here in the official docs and here in the API reference. While this does not guarantee that both tools will be called, you can try it and see whether forcing tool usage works for you.
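For illustration, here is a minimal sketch of what such a request might look like. The model name, vector store ID, and query are placeholders, and the exact tool type strings (e.g. "web_search_preview" vs. "web_search") may vary by SDK and API version, so check the current API reference:

```python
# Hypothetical sketch of a Responses API request that enables both tools
# and forces tool usage. All IDs and the query text are placeholders.
request_payload = {
    "model": "gpt-4o",
    "input": "Answer using BOTH my uploaded documents and the web, with citations from each.",
    "tools": [
        {"type": "web_search_preview"},  # tool type name may differ by API version
        {"type": "file_search", "vector_store_ids": ["vs_example_123"]},
    ],
    # "required" forces at least one tool call per response; it does not
    # guarantee that *both* tools run, so the prompt should still ask for both.
    "tool_choice": "required",
}

# With an openai.OpenAI() client, you would then send it like:
# response = client.responses.create(**request_payload)
```

Combining tool_choice="required" with an explicit instruction in the prompt to use both tools is what tends to tip the model into calling both rather than picking one.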
Hey, thanks for the info, Jai. I think this solved it. Explicitly prompting the model to do both web search and file search also helped. I also found a small bug on my side: sometimes it was calling both tools sequentially, but I wasn't rendering the returned JSON properly, because the schema for the web URLs from web search differs from the schema for the file references from file search. I fixed that as well.
And yes, setting tool_choice to required does solve it. Thanks.
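Since the rendering bug came from the two citation shapes differing, a small normalization step can help. This is a hypothetical sketch, not the actual annotation schema: the field names (`url_citation`, `file_citation`, `url`, `title`, `file_id`, `filename`) follow the general shape of Responses API output annotations but may differ by API version, so adapt it to the JSON you actually receive:

```python
def normalize_citations(annotations):
    """Map web and file citation annotations into one common shape.

    Assumes each annotation is a dict with a "type" key; the remaining
    field names are illustrative and should match your actual payload.
    """
    citations = []
    for ann in annotations:
        if ann.get("type") == "url_citation":
            citations.append({
                "source": "web",
                "label": ann.get("title", ""),
                "ref": ann.get("url", ""),
            })
        elif ann.get("type") == "file_citation":
            citations.append({
                "source": "file",
                "label": ann.get("filename", ""),
                "ref": ann.get("file_id", ""),
            })
    return citations


# Illustrative sample mixing both shapes:
sample = [
    {"type": "url_citation", "url": "https://example.com", "title": "Example"},
    {"type": "file_citation", "file_id": "file_abc", "filename": "notes.pdf"},
]
print(normalize_citations(sample))
```

With a single normalized shape, the rendering code only has to handle one citation format regardless of which tool produced it.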