How to work around the input token limit of LLM models

I have a document of 1000 pages, and I would like to run annotation or entity-extraction methods over the whole document. But the o1 model supports only 200,000 tokens of context. How can I get around this token limit without losing any of the content?
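For context, the approach I have been considering is to split the document into overlapping chunks that each fit within the context window, run extraction on each chunk, and merge the results afterwards. A minimal sketch of the chunking step in Python, using a rough words-to-tokens approximation (the chunk size, overlap, and ratio below are illustrative guesses, not tuned values):

```python
def chunk_text(text, max_tokens=150_000, overlap_tokens=1_000, words_per_token=0.75):
    """Split text into word-based chunks approximating a token budget.

    Tokens are estimated from word count (~0.75 words per token for
    English); for exact counts a tokenizer such as tiktoken would be
    needed. Chunks overlap so entities spanning a boundary appear
    whole in at least one chunk.
    """
    max_words = max(1, int(max_tokens * words_per_token))
    overlap_words = int(overlap_tokens * words_per_token)
    words = text.split()
    chunks = []
    start = 0
    while start < len(words):
        end = min(start + max_words, len(words))
        chunks.append(" ".join(words[start:end]))
        if end == len(words):
            break
        # Step back by the overlap so boundary entities are not cut.
        start = end - overlap_words
    return chunks
```

Each chunk would then be sent to the model separately, and the extracted entities deduplicated across chunks. Is this a reasonable strategy, or is there a better way to preserve cross-chunk context?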
