Assistant Citations/Annotations array is always empty

Created an Assistant using the API and uploaded documents for retrieval. Looking at the request/response pairs in the Playground, retrieval works well (nice!), but the annotations are always empty. The docs say you will get this strange citation note in the text response, which I do, but there are no corresponding annotations.

I know it’s finding the correct answer in my documents, because I’m asking questions it could only answer from those documents.
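
For reference, this is roughly how I’m reading the annotations back - a minimal sketch assuming the openai Python SDK’s beta Assistants endpoints (the thread ID is a placeholder):

    from openai import OpenAI

    client = OpenAI()

    # Placeholder thread ID - substitute your own thread.
    messages = client.beta.threads.messages.list(thread_id="thread_abc123")

    for message in messages.data:
        for part in message.content:
            if part.type == "text":
                print(part.text.value)
                # This is where the file citations should show up,
                # but it always comes back as an empty list.
                print(part.text.annotations)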

I’ve noticed that too. Very strange, since when I tested the same assistant that I’m using via the API in the Playground, it does give me annotations…

Indeed, I’m noticing the same on my side. Also, in the run step I see the retrieval field is blank, which seems strange (see example below). But just as you say, the response is specific enough that I think the retrieval must have succeeded.

        "tool_calls": [
          {
            "id": "call_xyz",
            "type": "retrieval",
            "retrieval": {}
          }
        ]
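
In case it helps anyone reproduce this, here’s roughly how I’m pulling the run steps - a sketch assuming the openai Python SDK’s beta Assistants endpoints (the IDs are placeholders):

    from openai import OpenAI

    client = OpenAI()

    # Placeholder IDs - substitute your own thread and run.
    steps = client.beta.threads.runs.steps.list(
        thread_id="thread_abc123",
        run_id="run_abc123",
    )

    for step in steps.data:
        if step.step_details.type == "tool_calls":
            for call in step.step_details.tool_calls:
                # For retrieval calls this prints an empty object,
                # i.e. no detail about what was actually retrieved.
                print(call.type, call.retrieval)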

I’m glad you mentioned this, @zachary.schillaci, because I was also expecting to get some feedback on the sources it found in the retrieval step.

BTW, the Assistant that was finding specific information last night no longer finds it for the exact same query (retrieval and annotations are still blank). I didn’t change anything, but something has clearly changed on the back end. I don’t get any errors; I just don’t get the correct response back from the LLM.

I have just struggled with this for the past 2 hours. Turns out the AI may or may not actually use annotations, haha. If you force an interaction with a prompt like “what is this file about?”, it has a higher chance of producing the annotations, so you can test with that.
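
If you want to try that, something like this is what I mean - a rough sketch assuming the openai Python SDK (the assistant ID is a placeholder):

    from openai import OpenAI

    client = OpenAI()

    # Placeholder assistant ID - use your own retrieval-enabled assistant.
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id,
        role="user",
        content="What is this file about?",
    )
    run = client.beta.threads.runs.create(
        thread_id=thread.id,
        assistant_id="asst_abc123",
    )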

I reported these bugs with a link to this thread. Hope someone on the Assistants team gets it. :slight_smile:

Same here, I’ve been scratching my head for a good amount of time. It always returns an empty array. I can even ask it to give the page number in the value, but annotations always comes back empty. Has anyone found a solution?

I’m also encountering this issue. Hopefully it gets fixed soon, as otherwise the retrieval mechanism seems very good.

Are we allowed to tag someone from OpenAI on this? Trying @logankilpatrick.

I’m also experiencing this problem; hope OpenAI can address it soon.

Same here - when hovering over the citation, nothing happens…

But these other citations actually work when hovering.

Not sure what the difference is between the two kinds…

I find it rather strange that OpenAI does not address this. Okay, it is a beta, but if you put documentation on the website showing how to get annotations and it is just not actually working, it should either be removed from the website or fixed… I guess OpenAI has a lot on their plate at the moment…

In my case, it seems that assistants with only one uploaded file return the annotations array correctly, while assistants with multiple files return an empty annotations array.
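
For comparison, this is roughly how I set up the single-file case - a sketch assuming the openai Python SDK (the file name and model are just examples):

    from openai import OpenAI

    client = OpenAI()

    # Example document - substitute your own file.
    file = client.files.create(file=open("report.pdf", "rb"), purpose="assistants")

    # Single-file assistant: annotations come back for me here.
    # With more than one entry in file_ids they seem to come back empty.
    assistant = client.beta.assistants.create(
        model="gpt-4-1106-preview",
        tools=[{"type": "retrieval"}],
        file_ids=[file.id],
    )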

I am experiencing this same problem as well, and I have uploaded only one file.

Me too, as of 11/26. I really need the passages that are referred to. I wonder if it breaks when there are more than a couple; I’m getting 17+ references mentioned.

    [ThreadMessage(id='msg_…', assistant_id='asst_…', content=[MessageContentText(text=Text(annotations=[], value='In the document regarding ABC, a total of 30 trials were included which investigated a total of 1,850 subjects【17†source】.'), type='text')], created_at=1701, file_ids=[], metadata={}, object='thread.message', role='assistant', run_id='run_…', thread_id='thread_…'),
     ThreadMessage(id='msg_…', assistant_id=None, content=[MessageContentText(text=Text(annotations=[], value='how many subjects were there ABC'), type='text')], created_at=1701, file_ids=[], metadata={}, object='thread.message', role='user', run_id=None, thread_id='thread_…')]

I’ve removed the timestamps and ids for security purposes.

I’m also seeing the citation markers referenced in the text, but an empty annotations array.

e.g.【1†source】

It seems that, in order to release fast, the team hasn’t finished implementing the annotations in the call. It’s actually common to build the skeleton but not implement it 100%. I’m also getting the source cited directly in the message with the annotations empty. We shouldn’t get the source cited directly in the message text; it should always be a separate field of the response.
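
Until that’s fixed, one workaround is to strip the markers from the message text before displaying it - a rough sketch, with the pattern based on the 【n†source】 examples above:

    import re

    # Matches embedded citation markers like 【17†source】.
    CITATION_MARKER = re.compile(r"【\d+†[^】]*】")

    def strip_citation_markers(text: str) -> str:
        return CITATION_MARKER.sub("", text).strip()

    print(strip_citation_markers(
        "a total of 30 trials were included which "
        "investigated a total of 1,850 subjects【17†source】."
    ))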