Yes, it sounds great. I’ve got 8K now with GPT-4 and am looking forward to 32K. But realistically, if 90% of the query responses I’m looking for can be found in one or two paragraphs, is it really helpful to feed the LLM 50 pages of text for each query? And isn’t that going to get prohibitively expensive? I mean, a million-token context window will be great for summarizing a book, but how great will it be for finding the paragraph where Huckleberry Finn and Tom Sawyer first encounter Jim in Mark Twain’s “Huckleberry Finn”?
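For what it’s worth, the alternative I have in mind is a retrieval step before the LLM call: score the book’s paragraphs against the query and send only the top hits instead of the whole text. Here’s a toy sketch using naive keyword overlap (the paragraph texts and function names are just made-up illustrations, and a real system would use embeddings or BM25 rather than this):

```python
def score(paragraph: str, query: str) -> int:
    """Count how many query words appear in the paragraph (toy relevance score)."""
    query_words = set(query.lower().split())
    return sum(1 for word in paragraph.lower().split() if word in query_words)

def top_paragraphs(paragraphs: list[str], query: str, k: int = 2) -> list[str]:
    """Return the k highest-scoring paragraphs for the query."""
    return sorted(paragraphs, key=lambda p: score(p, query), reverse=True)[:k]

# Hypothetical mini-corpus standing in for the book's paragraphs.
book = [
    "Tom Sawyer whitewashed the fence on a Saturday morning.",
    "Huck found Jim on Jackson's Island, hiding after running away.",
    "The river carried the raft past Cairo in the fog.",
]

query = "where does Huck first encounter Jim"
print(top_paragraphs(book, query, k=1))
```

You’d then pass only those one or two paragraphs to the model as context, so the cost per query stays roughly constant no matter how long the book is.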