I am calling the Responses API with this:
from openai import OpenAI

client = OpenAI()

response = client.responses.parse(
    model=model,
    # reasoning={"effort": "high"},  # the Responses API takes a reasoning object, not reasoning_effort
    tools=[{
        "type": "file_search",
        "vector_store_ids": [vectorstore],
    }],
    input=prompt,
    instructions=instructions,
    text_format=Encounter,  # my Pydantic model for the structured output
)
When I do this with gpt-4.1-mini, I get a number of response.output items that have content.
When I do this with o4-mini, I get only a set of ResponseFileSearchToolCall items and nothing with any content at all.
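For reference, this is roughly how I am inspecting what comes back (a minimal diagnostic sketch; attribute names follow the openai-python SDK for the Responses API):

    # Dump the type of every output item and pull out any message text.
    for item in response.output:
        print(item.type)  # e.g. "file_search_call", "reasoning", "message"
        if item.type == "message":
            for part in item.content:
                if part.type == "output_text":
                    print(part.text)

    # Convenience accessors: empty/None when no message content was produced.
    print(response.output_text)    # concatenated text; "" for o4-mini here
    print(response.output_parsed)  # the parsed Encounter, None when there is no text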
Before I added the file_search tool, o4-mini was giving me by far the better results for my task, albeit with about 20% made-up data. The file search was meant to eliminate that 20% of fictional results, but instead it has seemingly locked o4-mini into some weird corner where it never actually gives an answer.
This has been consistent across multiple test calls.
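One workaround I have been considering, though I have not verified it helps here (an untested sketch, assuming previous_response_id carries the retrieved file_search context forward): let the first call do the retrieval, then request the final structured answer in a follow-up turn that offers no tools, so the model cannot stop at the tool call.

    # Untested workaround sketch: a second turn with no tools, chained to the
    # first response so the retrieved passages stay in context.
    followup = client.responses.parse(
        model=model,
        previous_response_id=response.id,
        input="Using the passages you retrieved, return the final structured answer.",
        text_format=Encounter,
    )
    print(followup.output_parsed)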
gpt-4.1-mini has consistently returned content. Sadly, the content is still not that great; the reasoning models seem far better suited to my use case, which involves summing values.
By trial and error I have found that o3-mini seems to work with this combination of structured output and the file_search tool. I still think o4-mini not working is a straightforward bug.