We are using annotations from file_search in the Assistants API, but it seems like start_index and end_index do not indicate indices from either the source document or the generated text.
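For anyone wanting to check this themselves, here is a minimal sketch (the thread ID is a placeholder for a thread that has already run file_search) that just dumps the annotation fields so you can see what the indices actually line up against:

```python
from openai import OpenAI

client = OpenAI()

# Placeholder thread that already produced a file_search-backed response.
messages = client.beta.threads.messages.list(thread_id="thread_abc123")

for message in messages.data:
    for part in message.content:
        if part.type != "text":
            continue
        for annotation in part.text.annotations:
            # Print the raw indices next to the slice of the response text
            # they supposedly cover, so you can judge what they point at.
            print(
                annotation.start_index,
                annotation.end_index,
                repr(part.text.value[annotation.start_index:annotation.end_index]),
            )
```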
Only the v1 retrieval tool of the Assistants API had the ability to mark, by line number, the sections of text it wanted to cite, after manually loading document pages to read them.
The v2 file search has a search tool that blows chunks of documents back at the AI, and that’s it.
If you want to wear your M(A)GA hat, watch the AI describe how many Assistants tool calls it would take just to reach the point of marking one section for annotation, when given the text of the v1 "retrieval" tool…
To effectively utilize the myfiles_browser tool to browse a document, find a relevant passage, and produce a document citation, I would follow a systematic approach. Here’s a step-by-step description of how I would use the tool:
Step 1: Open the Document
First, I would need to open the document that potentially contains the information requested by the user. If the user provides a specific document ID, I would use the open_url(url: str) function to open the document directly. For example:
open_url("123e4567-e89b-12d3-a456-426614174000")
This function call would open the document associated with the provided UUID.
Step 2: Search for Relevant Information
Once the document is open, I would use the search(query: str) function to locate relevant sections within the document. The query would be based on keywords or phrases provided by the user or inferred from their request. For example:
search("machine learning applications")
This function call would search the document for the phrase “machine learning applications” and return a list of search results where the phrase appears.
Step 3: Review Search Results
After receiving the search results, I would evaluate each result to determine its relevance to the user’s request. If the search results are extensive, I might need to use the click(id: str) function to open and read through specific results for more detailed information. For instance:
click("result_id_1")
This would open the document at the position identified by “result_id_1” in the list of search results.
Step 4: Navigate and Scroll Through the Document
If necessary, I would use the scroll(amt: int) function to navigate through the document, either to read more of the context around the search result or to explore sections adjacent to the highlighted search results:
scroll(3) # Scrolls down three units
Step 5: Quote the Relevant Passage
Once I find a passage that clearly answers the user’s request, I would use the quote_lines(line_start: int, line_end: int) function to mark and store this passage. For example:
quote_lines(142, 145)
This function call would store the text span from line 142 to line 145, inclusive.
Step 6: Provide the Quoted Information to the User
After quoting the relevant lines, I would present this information to the user, typically by including the quoted text in my response. This ensures that the user receives a precise citation from the document that addresses their query.
Step 7: Additional Navigation or Searches if Needed
If the first search or quoted passage does not fully satisfy the user’s request, I might perform additional searches or navigate back to earlier search results using the back() function:
back()
This would return me to the previous page, allowing me to choose another search result or perform a new search.
By following these steps, I ensure a thorough and efficient use of the myfiles_browser tool to find and cite relevant information from documents in response to user queries.
For contrast, the v2 AI is only implored to write a chunk number when it is writing a response:
The message idx is provided at the beginning of the message from the tool in the following format [message idx], e.g. [3].
The search index should be extracted from the search results, e.g. # 【13†Paris†4f4915f6-2a0b-4eb5-85d1-352e00c125bb】refers to the 13th search result, which comes from a document titled “Paris” with ID 4f4915f6-2a0b-4eb5-85d1-352e00c125bb.
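If your responses do still contain those markers, a small parser can at least recover the search index, title, and file ID. This is only a sketch against the format quoted above, with a made-up helper name; the exact marker characters in your own output may differ:

```python
import re

# Matches markers like 【13†Paris†4f4915f6-2a0b-4eb5-85d1-352e00c125bb】
CITATION_RE = re.compile(r"【(\d+)†([^†】]+)†([0-9a-f-]{36})】")

def extract_citations(text: str) -> list[tuple[int, str, str]]:
    """Return (search_index, title, file_id) tuples found in the model output."""
    return [(int(idx), title, file_id) for idx, title, file_id in CITATION_RE.findall(text)]

sample = "Paris is the capital of France.【13†Paris†4f4915f6-2a0b-4eb5-85d1-352e00c125bb】"
print(extract_citations(sample))
# [(13, 'Paris', '4f4915f6-2a0b-4eb5-85d1-352e00c125bb')]
```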
Not even getting in-document markup in the v2 response? Then either thousands and thousands of retrieval tokens are distracting the AI, or the markup is being stripped before you receive it. Nothing to match the citation against? Then you aren't meant to see OpenAI's chunking strategy.
I'm not sure what the current plan and implementation are, or why the AI is told to produce links that never succeed.
The solution is to do your own RAG: inject the retrieved text automatically, cache what was sent, and prefix each line with a line number, like a legal document. Then instruct the AI on a parsable format for the "line number range used" in its response.
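A minimal sketch of that approach, with a made-up `[lines N-M]` citation format and hypothetical helper names, just to show the shape of it:

```python
import re

def number_lines(chunk: str, start: int = 1) -> str:
    """Prefix every line of an injected chunk with a line number, legal-document style."""
    return "\n".join(f"{start + i:>4}| {line}" for i, line in enumerate(chunk.splitlines()))

CITATION_INSTRUCTION = (
    "When you quote the provided context, append a citation in the exact form "
    "[lines 12-15] giving the line number range you used."
)

# Parse the "[lines N-M]" citations back out of the model's reply.
LINES_RE = re.compile(r"\[lines (\d+)-(\d+)\]")

def cited_ranges(reply: str) -> list[tuple[int, int]]:
    return [(int(a), int(b)) for a, b in LINES_RE.findall(reply)]

context = number_lines(
    "Machine learning has many applications.\nOne example is fraud detection."
)
# Send `context` plus CITATION_INSTRUCTION in your prompt, cache `context`,
# then map cited_ranges(reply) back onto the cached, numbered text.
```

Because you cached exactly what was injected, the cited line ranges resolve unambiguously back to spans of the source, which is precisely what the v2 annotations fail to give you.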