How does summarization of a very large file work?
The Assistants API accepts files up to 512 MB. It has the
myfiles_browser tool, which exposes a function called
open_url. When you ask for a summary of a file, it calls
open_url and processes the contents.
However, 512 MB is roughly 128M tokens, about 1,000 times larger than the 128k-token context window of the largest-capacity model, GPT-4-turbo.
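For clarity, here is the rough arithmetic behind that claim. The ~4 bytes-per-token figure is a common rule of thumb for English text, not an exact conversion:

```python
# Rough token math behind the question (assumes ~4 bytes per token,
# a common rule of thumb for English text; not an exact figure).
FILE_BYTES = 512 * 1024 * 1024   # 512 MB upload limit
BYTES_PER_TOKEN = 4              # assumption
CONTEXT_TOKENS = 128_000         # GPT-4-turbo context window

file_tokens = FILE_BYTES // BYTES_PER_TOKEN
ratio = file_tokens / CONTEXT_TOKENS

print(file_tokens)   # 134217728, i.e. ~128M tokens
print(round(ratio))  # 1049, i.e. on the order of 1,000x
```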
Does it call a hidden function to summarize recursively, or does it do something else?
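To make the question concrete, here is a minimal sketch of the recursive (map-reduce) summarization I have in mind. Everything here is hypothetical: `summarize_chunk` stands in for a real model call, and `CHUNK_SIZE` is an arbitrary assumption, not anything the API documents:

```python
# Hypothetical sketch of recursive (map-reduce) summarization.
# summarize_chunk stands in for an LLM call; here it just truncates
# its input, so the recursion shrinks the text and terminates.

CHUNK_SIZE = 1000  # assumed per-call input budget (characters, for this toy)

def summarize_chunk(text: str) -> str:
    # Placeholder for a real model call; a real one would return
    # a short natural-language summary of the chunk.
    return text[: CHUNK_SIZE // 10]

def summarize(text: str) -> str:
    # Base case: the text already fits in a single model call.
    if len(text) <= CHUNK_SIZE:
        return summarize_chunk(text)
    # Map: summarize each fixed-size chunk independently.
    chunks = [text[i : i + CHUNK_SIZE] for i in range(0, len(text), CHUNK_SIZE)]
    partial = [summarize_chunk(c) for c in chunks]
    # Reduce: recursively summarize the concatenated partial summaries.
    return summarize(" ".join(partial))

# Even a ~1M-character input collapses to one chunk-sized summary.
print(len(summarize("x" * 1_000_000)) <= CHUNK_SIZE // 10)  # True
```

If the Assistants API does something like this internally, it is not visible in the run steps, which is the root of my question.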