I had come to the same conclusions, but my goal was to keep token consumption to a minimum.
I agree that if you want to optimize for both performance and cost, presenting a summary of each chapter, rather than just sub-chapter section titles, each time new content is generated is a good idea. That is, aside from the [maybe already solved?] issue that the longer the context, the more the AI tends to forget parts of it.
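For concreteness, here's a rough sketch (in Python) of what assembling that kind of summary-based context might look like; the chapter titles, summaries, token budget, and helper names are all made up for illustration, not anyone's actual pipeline.

```python
# Hypothetical sketch: build a compact context for the next generation request
# by including a one-paragraph summary of each finished chapter instead of the
# full text. Titles, summaries, and the token budget are placeholders.

chapters = [
    {"title": "Chapter 1: The Setup",
     "summary": "Introduces the narrator and the small coastal town."},
    {"title": "Chapter 2: The Incident",
     "summary": "A storm strands the ferry; two characters clash over what to do."},
]

MAX_CONTEXT_TOKENS = 4000  # rough budget; assumes ~4 characters per token


def estimate_tokens(text: str) -> int:
    """Very rough token estimate; a real tokenizer would differ."""
    return len(text) // 4


def build_context(chapters, next_section_title):
    """Concatenate chapter summaries, dropping the oldest if over budget."""
    parts = [f"{c['title']}: {c['summary']}" for c in chapters]
    while parts and sum(estimate_tokens(p) for p in parts) > MAX_CONTEXT_TOKENS:
        parts.pop(0)  # trim from the start so recent chapters survive
    summary_block = "\n".join(parts)
    return (
        "Summaries of the book so far:\n"
        f"{summary_block}\n\n"
        f"Now write the next section: {next_section_title}"
    )


print(build_context(chapters, "Chapter 3: The Search Begins"))
```

The trimming step is where the cost/forgetting trade-off shows up: a bigger budget keeps more of the book in view, a smaller one keeps the prompt cheap.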
Again, it matters a lot what kind of book you're writing. If it's a novel rather than non-fiction, I think that changes things a bit. Novels probably need a longer context for any query, because those details can only come from the book itself, not from knowledge the LLM already contains.