The conversation is too long, please start a new one

There’s not much you can do about that for now. The method I’m using is to create a custom GPT in the Builder and upload the PDF or TXT file to its Knowledge in the creation tab.
It works pretty well with a single document (like an application guide or manual), since the GPT can refer back to its “Knowledge” after every message and give better, more tailored responses. (Make sure that at the start of the chat with that specific GPT you ask it to “read and analyze your Knowledge”; it will go through the material uploaded to the Knowledge tab and keep it in mind.)

With a lot of pages, though, it becomes very difficult. I was trying to make a build tailored for writing a specific light novel series. I managed to upload 2,000 pages of content to its “Knowledge” tab, and it did start “analyzing” when I asked it to at the beginning of the chat, but it took a long time and eventually returned that error message once it hit the token limit.

I believe the only thing that can be done is to use the API with the gpt-4-1106-preview model (GPT-4 Turbo), because of its larger context window, at least until that version reaches the regular website. But be careful: in the API you are charged per token, so if you send 1,000 pages of content, you will pay to process all of it. I didn’t get to check whether the API environment also lets you “train” a private build of the model the way the Builder does, so you could end up with worse results there despite the better model, and still get charged.
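
If anyone wants to try the API route, here is a rough sketch of what it looks like with the openai Python package (v1.x), plus tiktoken to estimate the input token count before you send anything large. The file name, prompts, and light-novel framing are placeholders, not the exact setup I used:

```python
# Minimal sketch: send document text to gpt-4-1106-preview via the Chat Completions API.
# Assumes OPENAI_API_KEY is set and the openai (v1.x) and tiktoken packages are installed.
import tiktoken
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("series_notes.txt", "r", encoding="utf-8") as f:  # placeholder file name
    source_text = f.read()

# Estimate how many input tokens you are about to pay for
# (gpt-4-1106-preview uses the cl100k_base encoding).
encoding = tiktoken.get_encoding("cl100k_base")
n_tokens = len(encoding.encode(source_text))
print(f"About to send roughly {n_tokens} input tokens; "
      "check the current per-token pricing before running this on 1000+ pages.")

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # larger (~128k-token) context window
    messages=[
        {"role": "system",
         "content": "You are a writing assistant for a light novel series."},
        {"role": "user",
         "content": f"Reference material:\n\n{source_text}\n\n"
                    "Keep this material in mind and summarize the main plot threads."},
    ],
)
print(response.choices[0].message.content)
```

Keep in mind that the whole document counts as input tokens on every request, so even with the bigger context window it gets expensive fast if you resend it with each message.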