Assistants API always returns empty annotations

Response: 【11†source】
Response Annotations: Text(annotations=[])

There are sources but they are not available.


I am experiencing the same thing as of this morning. Yesterday this definitely was not the case, as I was specifically implementing a system to render those references as inline footnotes and tooltips.

Update: this is now working again.

As of this moment, not working for me:

Response: 【13†source】


I’m also encountering the same issue. Hopefully, it’s fixed soon because otherwise the Assistants API retrieval seems better than most I’ve tried.


Experiencing the same: sometimes the array contains the annotations, sometimes it is empty, although the citation markers always appear in the text.

I am also facing the same issue. Every time the annotations are empty.

I’m having the same issue. I was delighted with the new retrieval feature. It was working yesterday.

The annotation object is consistently empty for me as well. Only started working with it today so I don’t know if / when this behavior began.

Still no official answer on this? I am also interested in the topic.

OpenAI just imploded; I don't expect this to be fixed anytime soon. Look to LangChain and other solutions.

I’ve found an alternative solution to this issue. We can incorporate this into the prompt, enabling the assistant to respond with a source.

Reply in this format: {"answer": answer, "source": source}


In my case, it seems that an assistant with ONLY ONE uploaded file returns the annotations array correctly, while assistants with multiple files return an empty annotations array.

I am still experiencing this problem today. I have uploaded only one file.

I am experiencing this as well. But in my case I have a function call performing a search, which returns the source with the URL and everything. The annotations are still empty.

Are you using the assistant API to make the function call? Would be super nice to see a code sample. When using langchain I can make a function call that will give me the annotations. Can you make this work with a function call to the assistant?

Still occurring for me (11/30) - both in the playground and via the API. 90% of the time annotations is empty. Occasionally it will contain a file citation annotation but I cannot see any pattern in when it does vs doesn’t work…

"content": [
  {
    "type": "text",
    "text": {
      "value": "Yes, your plan includes coverage. Plan covers many different types of vaccinations, including those for the flu, shingles, measles, and more【15†source】. Additionally provides coverage for routine eye exams【16†source】.",
      "annotations": []
    }
  }
]

I am still seeing this issue (12/4) - both the playground and the API return empty annotations for me. The text always contains a citation marker, but the annotations field is always empty.

Just adding on that I see the citation, but the annotations field is empty. For example:

MessageContentText(text=Text(annotations=[], value='The dog food produced by ACME Dog Food Corporation exclusively contains naturally sourced ingredients, including meats, vegetables, grains, and fruits. Importantly, they strictly prohibit the use of artificial colors, flavors, or preservatives in any of their products【13†source】.'), type='text')
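Until this is fixed, one stopgap (my own workaround, not an official API; the marker pattern is inferred from the responses in this thread) is to strip the 【n†source】 tokens with a regex so users never see them:

```python
import re

# Matches citation markers like 【13†source】 that the Assistants API
# leaves in the text even when the annotations array comes back empty.
CITATION_PATTERN = re.compile(r"【\d+†[^】]*】")


def strip_citations(text: str) -> str:
    """Remove raw citation markers from the assistant's reply text."""
    return CITATION_PATTERN.sub("", text)
```

You lose the citations entirely, of course, but it's better than showing raw markers in the UI.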

Same error here. Is there something we need to configure to get the annotations?


I’m also seeing this error. Responses will use the 【13†source】 text, indicating that a document has been cited, but the annotations array remains empty. Easy RAG is a major reason we’re using the Assistants API, but without this the RAG features are far less useful for our use case. We’ll probably need to move away from the Assistants API until this and response streaming are functional. We need to be able to reference the source documents in our answers.

Beyond this Python code sample, from which we are left to infer the contents of the annotations array, there is no example anywhere in the API docs of what a populated annotations array actually looks like.
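For what it's worth, when annotations do come back, each element appears to be a file_citation (or file_path) object. Here is a sketch of turning them into numbered footnotes; the field names reflect my understanding of the schema and the input is a plain dict stand-in for the SDK object, so treat this as an assumption rather than a definitive reference:

```python
def footnote_citations(value: str, annotations: list[dict]) -> tuple[str, list[str]]:
    """Replace each citation marker with [n] and collect the cited file ids.

    `annotations` is assumed to be the list from message.content[0].text,
    where each item looks roughly like:
      {"type": "file_citation", "text": "【13†source】",
       "start_index": ..., "end_index": ...,
       "file_citation": {"file_id": "file-abc", "quote": "..."}}
    """
    footnotes = []
    for i, ann in enumerate(annotations, start=1):
        # Swap the raw marker for a readable footnote number.
        value = value.replace(ann["text"], f"[{i}]")
        footnotes.append(ann["file_citation"]["file_id"])
    return value, footnotes
```

You can then map each file id back to a filename via the Files API to render proper source links.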