Same issue here. The response text contains a web-search citation marker (e.g. \ue200cite\ue202turn0search0\ue201), but the ResponseFunctionWebSearch object does not expose the actual citation.
I’m having the same problem. I’ve tried a bunch of prompts, response formats, and models; if I run the exact same parameters 10 times, roughly 1 out of 10 responses will contain annotations, though all of them have a webSearchId.
Same issue as others have discussed. This does not seem unique to the OpenAI API; I had the same problem with the Gemini API. In either case, at the moment you cannot have your cake and eat it too. I imagine they are already using structured output functionality under the hood to produce the annotations array, so asking for your own structured output on top of it overwrites that structure.
The best solution I have found is to include an annotations array in the schema you define. I don’t really trust the output without a lot of manual verification, as I have seen sources get hallucinated, which really slows development down. The whole point of the annotations array, in my opinion, is to have an easy way to corroborate your output in the first place. You’d think there’d be more overlap between users who need structured output for agentic applications and those who want their responses grounded in sources.
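A minimal sketch of that workaround, assuming you build the response-format JSON schema yourself: add an explicit annotations array next to the answer field so the model is asked to emit its sources as part of the structured output. All field names here are illustrative, not part of any official API.

```python
import json

# Illustrative structured-output schema that carries its own citations.
# "answer" and "annotations" are names I chose; adapt them to your schema.
schema = {
    "type": "object",
    "properties": {
        "answer": {"type": "string"},
        "annotations": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "url": {"type": "string"},
                    "title": {"type": "string"},
                },
                "required": ["url", "title"],
                "additionalProperties": False,
            },
        },
    },
    "required": ["answer", "annotations"],
    "additionalProperties": False,
}

# This dict is what you would place inside the request's response-format /
# json_schema field; the exact wrapper differs per provider and SDK version.
print(json.dumps(sorted(schema["properties"].keys())))
```

Since the model fills in the URLs itself here (rather than the search tool attaching verified annotations), treat each returned URL as a claim to check, not a ground truth.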
Can someone at OpenAI fix this, please? In the meantime I guess I’ll try Claude or X or another provider.