I hope this message finds you well.
I am currently developing a product that generates documents of over 100 pages using the OpenAI and Perplexity APIs. However, I am running into token limits: providing the document's context in each request consumes a significant number of tokens, which prevents generating the desired number of pages in a single run.
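For concreteness, here is a minimal sketch of the pattern I am describing, assuming the document-so-far is re-sent as context on every call. Both `call_llm` and `estimate_tokens` are hypothetical stand-ins rather than real API calls; the point is only that prompt tokens grow quadratically with the number of pages.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token for English prose.
    return max(1, len(text) // 4)

def call_llm(prompt: str) -> str:
    # Placeholder for a real OpenAI/Perplexity API call; returns a
    # fixed-size chunk standing in for one generated "page".
    return "lorem ipsum " * 200

def generate_document(num_pages: int) -> tuple[str, int]:
    document = ""
    total_prompt_tokens = 0
    for _ in range(num_pages):
        # Re-sending the entire document each time is what exhausts
        # the token budget as the document grows.
        prompt = f"Continue the document:\n{document}"
        total_prompt_tokens += estimate_tokens(prompt)
        document += call_llm(prompt)
    return document, total_prompt_tokens
```

Doubling the page count more than doubles the prompt tokens consumed, which is the limitation I am hitting.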
I was wondering whether there is a solution that could help address this limitation. Specifically, I am looking for a way to optimize token usage, or an alternative approach that enables generation of extensive multi-page documents without compromising the context or output quality.
I would appreciate your insights and any suggestions you may have to overcome this issue.
Thank you for your time and support.