Reducing Load Times for Lengthy ChatGPT Sessions

Hi everyone,

I’ve been engaging with ChatGPT for an exciting project where I experimented with writing a book entirely through our dialogues. It’s been a fascinating journey, and I’ve noticed a significant improvement in the model’s ability to cater to my specific needs as our conversation history grew. The more we interacted, the more nuanced and helpful the responses became, which has been incredibly beneficial for my book writing project.

However, I’ve encountered a notable challenge as the chat log has expanded: the longer the conversation history grew, the slower the session loaded. Eventually it reached a point where responses were considerably delayed, and at times my browser would freeze and require a refresh.

To address this, I propose an enhancement in how ChatGPT and similar platforms manage lengthy chat histories. Instead of loading the entire conversation history every time a session is accessed, could the system be optimized to load only the most recent interactions? Then, as a user scrolls up, the platform could dynamically load earlier parts of the conversation. This approach might significantly reduce initial load times and improve the user experience, especially for those of us working on extensive projects with ChatGPT.
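For illustration, here’s a rough sketch of how such windowed loading might work in a web client. It assumes a hypothetical paginated endpoint (GET /api/messages?before=<id>&limit=<n>, returning messages oldest-first); the actual ChatGPT backend isn’t public, so every name here is made up:

```typescript
// Rough sketch of windowed ("lazy") chat-history loading.
// The /api/messages endpoint, its parameters, and the element IDs
// are all hypothetical, for illustration only.

interface Message {
  id: string;
  role: "user" | "assistant";
  content: string;
}

const PAGE_SIZE = 50;
let oldestLoadedId: string | null = null;

async function fetchPage(beforeId: string | null): Promise<Message[]> {
  const params = new URLSearchParams({ limit: String(PAGE_SIZE) });
  if (beforeId !== null) params.set("before", beforeId);
  const res = await fetch(`/api/messages?${params}`);
  return res.json();
}

function prependMessages(messages: Message[], container: HTMLElement): void {
  // Messages arrive oldest-first; a fragment preserves their order when prepending.
  const frag = document.createDocumentFragment();
  for (const msg of messages) {
    const div = document.createElement("div");
    div.textContent = `${msg.role}: ${msg.content}`;
    frag.appendChild(div);
  }
  container.prepend(frag);
}

async function init(): Promise<void> {
  const container = document.getElementById("chat")!;
  const sentinel = document.getElementById("top-sentinel")!; // marker sitting above the oldest loaded message

  // Initial load: only the most recent page, not the whole history.
  const recent = await fetchPage(null);
  prependMessages(recent, container);
  oldestLoadedId = recent[0]?.id ?? null;

  // When the user scrolls the sentinel into view, fetch the next older page.
  const observer = new IntersectionObserver(async (entries, obs) => {
    if (!entries.some((e) => e.isIntersecting) || oldestLoadedId === null) return;
    const older = await fetchPage(oldestLoadedId);
    if (older.length === 0) {
      obs.disconnect(); // reached the start of the conversation
      return;
    }
    prependMessages(older, container);
    oldestLoadedId = older[0].id;
  });
  observer.observe(sentinel);
}

init();
```

A production version would also need to preserve the scroll position when older messages are inserted above the viewport, but the core idea is just pagination plus an IntersectionObserver watching a sentinel at the top of the list.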

Implementing a more dynamic loading structure could be beneficial not only for individual creators and professionals but also in maintaining the performance integrity of the platform as a whole.

I’d love to hear thoughts from the community and the OpenAI team on this suggestion. Has anyone else experienced similar issues? Are there plans to enhance the user experience for lengthy sessions?

Thanks for considering my input!

Best,
Tim


I’d like to bump up this thread as it matters to me and I’m sure many others too.

I was excited to read today that GPTs can now learn from your conversation history. Wonderful news. However, I think it’s limited by the slowdown that accrues in lengthy conversations. New conversations need to be started sooner rather than later.

I hope there’s a way to improve it without negatively impacting the model’s performance. I’ll hold back my thoughts on approaches, as I know little about the technical implementation.

It would be amazing if this could be improved.


Yes, when the conversation is lengthy, generating a response takes around 20 seconds or even longer. When chatting on a mobile device, this can stretch to a minute.
In a very long conversation, it becomes like chatting with a friend about anything. At that point, I just want to have a casual conversation and don’t necessarily need crystal-clear logic that takes the entire context into account. In such instances, I would prefer to sacrifice some intelligence for a quicker response time.


This reply might be a bit dated, but I’ve just been ‘chatting’ with GPT-4o, and the suggestion I got was to give it context for specific chats. In my case, I went back about a month or so and told it that a group of chats spanning four weeks all related to ‘project-1’, the next group to ‘project-2’, and so on. Then I simply told it to concentrate on the current ‘project-2’ and ignore all the others when deciding context. This sped up searches considerably.
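Roughly, the instructions I gave looked like this (paraphrasing from memory, so treat the exact wording and project names as illustrative):

```
The group of chats starting about a month back all relate to "project-1".
The group of chats after that all relate to "project-2".
From now on, concentrate on the current "project-2" and ignore
all the others when deciding context.
```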


Hey, could you go into a bit more detail on how you did this?