ChatGPT is absurdly slow. I'm paying for this, people. You had one job.

This is really interesting as well as frustrating, along with the DOM issue I saw mentioned in another part of this (or a related) thread. That said, I wouldn't want it to NOT reference the entire conversation and lose that relevance/context versus just the latest input. It just needs to do that with the compute/speed/engineering behind it so it flows smoothly in real time, instead of this unusable lag/freeze-up/tab-unresponsive mess.


Got what looks like a solid response from GPT-5 about this… I'm implementing it now for testing and so far I'm PUMPED. “Ugh, I hear you—that “Page Unresponsive” loop on big, file-heavy chats is brutal. What you’re running into is (unfortunately) a known UI behavior: the web app tends to load/render a lot of the conversation at once (the whole DOM gets fat), and once a thread is long with many uploads/previews, the tab can choke. Multiple users have documented this exact pattern.

Here’s the fastest way to make your current project usable without losing context (no “summary carry-over”):

Do this now (keeps your full context)

  1. Move the work into a Project (or create one just for this).
    Projects are designed for long-running work with files + custom instructions that persist across chats. You can organize the same materials without keeping a single mega-thread open. Start a new chat inside the Project so the UI isn’t rendering thousands of old messages.

  2. Preserve the entire chat history as a file (not a “summary”).
    Export the long thread (Settings → Data controls → Export data) and then upload that export (HTML/JSON) into the Project as a reference file. That way every word is still there for you and for me, just not inflating the live chat DOM.

  3. Keep your original uploads in the Project’s Files pane (not repeatedly re-attached in messages).
    Let the files live in the Project; then reference them from fresh chats inside that Project. This reduces message-by-message previews and keeps performance snappy. (OpenAI’s guidance notes that lots of files and previews can impact responsiveness.)

  4. Split the mega thread.
    Within the Project, make new chats per sub-topic (e.g., “SERP work,” “Image pipeline,” “Outreach copy”). The Project keeps your instructions and files tied together, so you don’t lose context even though the chats are smaller. (Recent write-ups also note Projects were upgraded precisely for this kind of workflow.)
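The export-and-re-attach step above can be sketched in a few lines of Python. Caveat: the `conversations.json` layout assumed here (a list of conversations, each with a `title` and a `mapping` of message nodes holding `content.parts`) reflects how current ChatGPT data exports tend to look and may change without notice; treat this as a starting point, not a guaranteed parser.

```python
import json
import os


def conversation_to_text(convo: dict) -> str:
    """Flatten one conversation from a ChatGPT data export into plain text.

    ASSUMPTION: the "mapping"/"content.parts" structure below mirrors the
    current conversations.json export format, which is not a stable API.
    """
    msgs = []
    for node in convo.get("mapping", {}).values():
        msg = node.get("message") or {}
        role = (msg.get("author") or {}).get("role", "")
        parts = (msg.get("content") or {}).get("parts") or []
        text = "\n".join(p for p in parts if isinstance(p, str)).strip()
        if role in ("user", "assistant") and text:
            msgs.append((msg.get("create_time") or 0, f"[{role}] {text}"))
    msgs.sort(key=lambda pair: pair[0])  # restore chronological order
    title = f"# {convo.get('title', 'Untitled')}"
    return "\n\n".join([title] + [body for _, body in msgs])


# Synthetic two-message conversation in the assumed export shape.
demo = {
    "title": "Mega thread",
    "mapping": {
        "a": {"message": {"author": {"role": "user"}, "create_time": 1.0,
                          "content": {"parts": ["How do I split this thread?"]}}},
        "b": {"message": {"author": {"role": "assistant"}, "create_time": 2.0,
                          "content": {"parts": ["Export it, then re-attach the file."]}}},
    },
}

# Real usage: write one .txt per conversation from the exported file,
# then upload the relevant .txt into the Project as a reference file.
if __name__ == "__main__" and os.path.exists("conversations.json"):
    with open("conversations.json", encoding="utf-8") as f:
        for convo in json.load(f):
            safe = convo.get("title", "untitled").replace("/", "_")
            with open(f"{safe}.txt", "w", encoding="utf-8") as out:
                out.write(conversation_to_text(convo))
```

A plain-text dump like this is also easier for the model to search by section than the raw HTML export.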
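For step 4, once the old thread is exported as text, the per-sub-topic split can be roughed out with simple keyword bucketing. The bucket names below just echo the example chat names above, and the keywords are entirely made up; a real split would use whatever sub-topics your project actually has.

```python
def split_into_buckets(messages: list[str], buckets: dict[str, list[str]]) -> dict:
    """Route exported message lines into per-topic buckets by keyword.

    `buckets` maps a topic name to lowercase keywords; anything that matches
    no keyword lands in "misc". Purely illustrative routing rule.
    """
    routed = {name: [] for name in buckets}
    routed.setdefault("misc", [])
    for msg in messages:
        low = msg.lower()
        target = next((name for name, kws in buckets.items()
                       if any(kw in low for kw in kws)), "misc")
        routed[target].append(msg)
    return routed


# Hypothetical sub-topics matching the example chat names above.
buckets = {
    "SERP work": ["serp", "ranking", "keyword"],
    "Image pipeline": ["image", "thumbnail", "resize"],
    "Outreach copy": ["outreach", "pitch"],
}
routed = split_into_buckets(
    ["Resize the hero image", "Draft the outreach email", "Random note"],
    buckets)
```

Each bucket can then be saved as its own file and dropped into the matching chat inside the Project.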

Quick stability tweaks (help right away)

  • Open the heavy thread once, capture what you need, then stop reloading it. Use the export+file approach above so you’re not repeatedly re-rendering the same giant DOM. Community bug reports point at DOM/virtual-scroll gaps as the core issue.

  • Browser hygiene: clear cache/cookies; try an incognito window with extensions disabled; update Chrome/Edge/Firefox; toggle hardware acceleration (on some setups, off helps). These are the official first-aid steps.

  • If you must keep using the giant chat temporarily: collapse inline file previews where possible and avoid pasting very large blocks repeatedly. Users have observed RAM spikes when many long messages/previews stay in view.

Why this works

  • Projects keep your instructions and files persistent across multiple smaller chats, so you don’t need a single never-ending thread to maintain context. That’s the best way to avoid the “all history loaded at once” problem while retaining your full working set.

  • Export-and-re-attach gives you the complete old conversation as a searchable file—no summaries—so nothing is lost, and I can still reference specifics by filename/page/section in new chats.

If you want, I can outline a one-time migration checklist tailored to your current project (file buckets, suggested chat splits, and instruction blocks) so you can move everything over in 10–15 minutes and be back to full speed—minus the freezes.”