Created an Assistant using the API and uploaded documents for retrieval. Looking at the request/response pairs in the Playground, retrieval works well (NICE!), but the annotations array is always empty. The docs say you will get this strange citation marker in the text response, which I do, but there are no annotations that correspond to it.
I know it’s finding the correct answer in my documents, because I’m asking questions whose answers could only come from there.
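For reference, this is where the docs say the citations should show up — a minimal sketch using the v1 Python SDK, with a placeholder thread ID:

```python
from openai import OpenAI

client = OpenAI()

# List the messages on the thread (placeholder thread ID).
messages = client.beta.threads.messages.list(thread_id="thread_abc123")

for msg in messages.data:
    for part in msg.content:
        if part.type == "text":
            print(part.text.value)        # the text, with 【n†source】 markers baked in
            print(part.text.annotations)  # docs say file citations appear here; I get []
```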
Indeed, noticing the same on my side. Also, in the run step I see the retrieval field is blank, which seems strange (see example below). But just as you say, the response is specific enough that I think the retrieval must have succeeded.
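Here’s how I’m checking the run step, in case anyone wants to reproduce — both IDs are placeholders:

```python
from openai import OpenAI

client = OpenAI()

# Inspect the steps of a completed run (placeholder IDs).
steps = client.beta.threads.runs.steps.list(
    thread_id="thread_abc123",
    run_id="run_abc123",
)

for step in steps.data:
    # For a retrieval tool call I'd expect details here,
    # but the retrieval field comes back empty.
    print(step.type, step.step_details)
```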
I’m glad you mentioned this @zachary.schillaci, because I was also expecting to get some feedback on the sources it found in the retrieval step.
BTW, the Assistant that was finding specific information last night no longer returns the right answer for the exact same query (retrieval details and annotations are all still blank). I didn’t change anything, but something has clearly changed on the back end. I don’t get any errors; I just don’t get the correct response back from the LLM.
I have just struggled with this for the past 2 hours. Turns out the model may or may not actually use annotations, haha. If you force an interaction with a prompt like “What is this file about?”, it has a higher chance of producing annotations, so you can test with that (sketch below).
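Something like this, if anyone wants to try — the thread and assistant IDs are placeholders:

```python
from openai import OpenAI

client = OpenAI()

# Ask a question that forces the model to talk about the file itself.
client.beta.threads.messages.create(
    thread_id="thread_abc123",
    role="user",
    content="What is this file about?",
)

run = client.beta.threads.runs.create(
    thread_id="thread_abc123",
    assistant_id="asst_abc123",
)
# Poll until the run completes, then check the latest message's annotations.
```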
Same here, I’ve been scratching my head for a good amount of time. It always returns an empty array. I can even ask it to include the page number in the value, but annotations still comes back empty. Has anyone found a solution?
I find it rather strange that OpenAI doesn’t address this. Okay, it’s a beta, but if you put documentation on the website showing how to get annotations and it just doesn’t work, it should either be removed from the site or fixed… I guess OpenAI has a lot on their plate at the moment…
In my case, it seems that assistants with only one uploaded file return the annotations array correctly, while assistants with multiple files return an empty one (see the sketch below for a quick way to test this).
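If you want to test the one-file theory, something like this should do it — assuming the beta retrieval tool; the model name and file ID are just examples:

```python
from openai import OpenAI

client = OpenAI()

# Assistant with exactly one file attached, to check whether
# annotations come back when retrieval only has one source.
assistant = client.beta.assistants.create(
    name="single-file-annotation-test",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=["file_abc123"],  # exactly one uploaded file
)
```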
Me too, as of 11/26. I really need the passages that are referred to. I wonder if it breaks when there are more than a couple. I’m getting 17+ references mentioned.
[ThreadMessage(id='msg_…', assistant_id='asst_…', content=[MessageContentText(text=Text(annotations=[], value='In the document regarding ABC, a total of 30 trials were included which investigated a total of 1,850 subjects【17†source】.'), type='text')], created_at=1701, file_ids=[], metadata={}, object='thread.message', role='assistant', run_id='run_…', thread_id='thread_…'),
ThreadMessage(id='msg_…', assistant_id=None, content=[MessageContentText(text=Text(annotations=[], value='how many subjects were there ABC'), type='text')], created_at=1701, file_ids=[], metadata={}, object='thread.message', role='user', run_id=None, thread_id='thread_…')]
I’ve removed the timestamps and ids for security purposes.
Seems that, in order to release fast, the team hasn’t finished implementing annotations in the call. This is actually common: build the skeleton but don’t implement it 100%. I’m also getting the source cited directly in the message text while the annotations array stays empty. We shouldn’t get the source inline in the message; it should always be a separate field of the response. In the meantime, you can strip the inline markers yourself (sketch below).
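A quick sketch of that workaround, assuming the markers always look like 【n†source】:

```python
import re

# Matches inline citation markers like 【17†source】.
CITATION = re.compile(r"【\d+†[^】]*】")

def strip_citations(text: str) -> str:
    """Remove the inline markers until the annotations array is populated."""
    return CITATION.sub("", text).strip()

print(strip_citations("a total of 1,850 subjects【17†source】."))
# -> "a total of 1,850 subjects."
```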