Sliding window/convolution over large text input

For example, given a large Python file or textbook chapter, slide a window over the file, feed each block into the model API until EOF, and then concatenate all the outputs. Has anyone tried this?
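A minimal sketch of the sliding-window idea: chunk the text into overlapping windows, run each through the model, and concatenate the results. `call_model` here is a hypothetical stand-in for whatever API call you use; the window and overlap sizes are arbitrary examples.

```python
def sliding_chunks(text, window=1000, overlap=200):
    """Yield overlapping windows of `text` until EOF."""
    step = window - overlap
    for start in range(0, len(text), step):
        yield text[start:start + window]
        if start + window >= len(text):
            break

def summarize_document(text, call_model, window=1000, overlap=200):
    """Run each window through the model (hypothetical `call_model`)
    and concatenate the per-window outputs."""
    return "\n".join(
        call_model(chunk)
        for chunk in sliding_chunks(text, window, overlap)
    )
```

The overlap gives the model some shared context between adjacent windows so sentences split at a boundary are still seen whole at least once.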


Yes. It’s not bad. Another approach is to run an extractive summarizer over the sections, then use GPT-3 to write an abstract for each. I also think it should be possible to keep a running summary of the document as a whole (call it S) and update S every time you read a chunk. This is basically what the human brain does.
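The running-summary idea can be sketched as a simple fold over the chunks: feed the model the current summary S plus the next chunk, and replace S with whatever it returns. Again, `call_model` is a hypothetical placeholder for a real GPT-3 call, and the prompt wording is just an illustration.

```python
def running_summary(chunks, call_model, summary=""):
    """Maintain one summary S for the whole document, updating it
    after each chunk (call_model is a hypothetical model API call)."""
    for chunk in chunks:
        prompt = (
            f"Current summary:\n{summary}\n\n"
            f"New text:\n{chunk}\n\n"
            "Update the summary to incorporate the new text."
        )
        summary = call_model(prompt)
    return summary
```

The nice property is constant prompt size regardless of document length; the risk is that details from early chunks get gradually squeezed out of S.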