@matt_s This is really helpful, and it’s given me a new algo to think through. Thank you so much. I want to try an experiment to enhance the original method I started with from @daveshapautomator and mix in a bit of space for previous context. If I were to build a little on your idea:
- Divide up the text into proper chunks, say 0 . . . n.
- Have a “context” variable that you hold the previous summary in.
- As you summarize paragraph n+1, you introduce it as: “This is the summary so far: (summary of 0 through n), and this is the next paragraph (n+1). Summarize this paragraph.” (Sketch of this loop below.)
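Here’s a minimal sketch of that loop, assuming the OpenAI Python client; the model name, function name, and exact prompt wording are my placeholders, not a fixed part of the method:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rolling_summary(chunks: list[str], model: str = "gpt-4o-mini") -> str:
    """Summarize chunks 0..n, feeding each call the summary so far."""
    context = ""  # the "context" variable holding the previous summary
    for i, chunk in enumerate(chunks):
        prompt = (
            f"This is the summary so far: {context}\n\n"
            f"This is the next paragraph ({i}):\n{chunk}\n\n"
            "Summarize this paragraph, folding it into the summary so far."
        )
        response = client.chat.completions.create(
            model=model,  # placeholder; use whatever model you're on
            messages=[{"role": "user", "content": prompt}],
        )
        # the new summary becomes the context for chunk i+1
        context = response.choices[0].message.content
    return context
```

Note the calls are sequential (chunk i+1 waits on the summary of chunk i), which is exactly why this is slower than summarizing isolated chunks in parallel.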
Gotta think about this. It’s less efficient but might produce better results: rather than having GPT read isolated chunks, it gets a certain amount of previous context for each one.
Maybe another way is to just take a human summary or description of the book, front-load it, and describe where we are in the book . . . (“This is a chunk 30% of the way through the book, which is about ‘short-human-description’. Write a summary.”)
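A rough sketch of assembling that prompt (the function and argument names here are hypothetical):

```python
def positional_prompt(chunk: str, index: int, total: int, description: str) -> str:
    """Front-load a human description of the book plus the chunk's rough position."""
    percent = round(100 * index / total)
    return (
        f"This is a chunk {percent}% of the way through the book, "
        f"which is about: {description}\n\n{chunk}\n\nWrite a summary of this chunk."
    )

# e.g. positional_prompt(chunks[i], i, len(chunks), "short-human-description")
```

This variant keeps each call independent, so unlike the rolling version it can run over all chunks in parallel.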