I mean, it’s fascinating that OpenAI, considering what it does, can’t make a chat that loads only the last 10 replies dynamically with lazy loading instead of dumping the entire chat on the browser. It helps no one.
err, they did that before … maybe someone had some complaints about it?
I actually slightly fixed this (for coding projects) by using a nativefier build and an injection script that deletes all but the last 20 articles in the main element. The problem seems to come down to large chats (600+ articles, an article being one message in the chat) bogging down the browser DOM.
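A minimal sketch of what such an injection script could look like, assuming ChatGPT renders one `article` per message inside `main` (the selector and the `trimChat` name are my own guesses, not the poster’s actual script):

```javascript
// How many most-recent messages to keep in the DOM.
const KEEP_LAST = 20;

// Pure helper: given an ordered list of entries, return the ones to drop
// so that only the last `n` remain.
function entriesToDrop(entries, n) {
  return entries.slice(0, Math.max(0, entries.length - n));
}

// Browser-only part (e.g. loaded via nativefier's --inject option).
// Commented out here so the helper above stays runnable outside a browser.
//
// function trimChat() {
//   const articles = Array.from(document.querySelectorAll('main article'));
//   entriesToDrop(articles, KEEP_LAST).forEach((a) => a.remove());
// }
// trimChat();
// new MutationObserver(trimChat)
//   .observe(document.body, { childList: true, subtree: true });
```

The MutationObserver re-runs the trim whenever new messages stream in, so the DOM never grows past roughly `KEEP_LAST` articles.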
I tried doing a quick and dirty VirtualList for the chat, but tbh it’s too much work and I’m busy with a project.
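For anyone curious, the core of a virtual list is just computing which rows are in view and rendering only those. A hedged sketch of that window math (assuming fixed-height rows, which real chat messages are not, so an actual implementation would need measured heights):

```javascript
// Compute the [first, last] indices of rows that should be rendered,
// given the scroll position, viewport size, and a fixed row height.
// `overscan` rows are rendered above/below the viewport to hide pop-in.
function visibleRange(scrollTop, viewportHeight, rowHeight, total, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    total - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return [first, last];
}
```

Everything outside that range gets replaced by a single spacer element of the right height, which is what keeps the DOM small no matter how long the chat is.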
Even after deleting the old messages and orphaned nodes, submitting a response starts to slow down. I know React’s virtual DOM diffing should slow down under a large load, and I suspect something like that is happening.
I’ll go back to tweaking the cleaner script and the virtualizer script as a hobby, but I really shouldn’t have to. The desktop app and the iOS app seem unaffected, but I’m on a 2019 Intel Mac and they don’t have a desktop app for it.
Hey fellow makers!
I just wanted to share a tool I think is really cool. It’s called Json2Media.com and it lets you create custom AI assistants without being tied to a single LLM provider.
With this platform, you can:
- Build and customize your own AI assistants
- Define models that fit your needs
- Connect your assistants to your models with ease
- Get access to free tools to help you get started
I used it and it’s still a work in progress, but I think it has a lot of potential. Give it a try; I think a few of you may like it.
Now that they have Projects, I’ve been creating a Project Summary chat that houses an up-to-date summary of everything it knows about the project so far; then I use that summary in the prompt of a new chat. It seems to speed things up.
How is this still not fixed? It has actually become much worse, to the point of unusability.
Here’s how I convinced Gemini Advanced that OpenAI programmers are idiots, compared to Google programmers, because a typical Gemini tab uses less than 1% of the memory a typical ChatGPT tab uses.
I have the same issue: 1.6 GB of RAM usage in the single Chrome tab that is running the GPT. When I use the native app on Windows, it’s a similar situation, if not worse.