I have a 1000-page document and I would like to run annotation or entity-extraction methods over the whole thing. But the o1 model supports only 200,000 tokens. How can I work around this token limit without losing any of the content?
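The usual workaround is to split the document into overlapping chunks that each fit within the model's context window, run extraction on each chunk, and merge the results. Below is a minimal sketch of such a chunker; it uses word counts as a rough proxy for tokens purely for illustration (in practice you would count tokens with a real tokenizer such as tiktoken), and the `max_words`/`overlap` values are arbitrary placeholders, not recommendations.

```python
def chunk_text(text, max_words=1000, overlap=100):
    """Split text into overlapping chunks so each stays under a size limit.

    Words are only a rough stand-in for tokens here; swap in a real
    tokenizer to count tokens exactly. The overlap between consecutive
    chunks reduces the chance an entity is cut in half at a boundary.
    """
    words = text.split()
    chunks = []
    step = max_words - overlap  # how far the window advances each time
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break  # last window already covers the end of the text
    return chunks
```

Each chunk is then sent to the model in its own request, and the extracted entities are merged and deduplicated across chunks afterwards, so no content is dropped even though no single request sees the whole document.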