OpenAI clowns, acknowledge your junk instead of playing possum. Own it in writing: "Yes, we are wrecking your paid service to feed our wet dreams of using all the computers for dead-end training."
So I switched to Claude. Less sycophancy, and it's a better coder anyway. And the company logo isn't a thinly veiled symbol of genocide. Bye for now.
Something is, again or still, very wrong, and it's getting worse. It's been severely wrong for me since mid-May. I'm a Pro subscription user and the service I'm getting is horrible: it takes around 15-30 minutes to answer, pegs the CPU at 100%, and when it does answer it repeats mistakes, answers previous queries, mixes and mashes up previous responses, hallucinates, and answers unrelated questions… useless.
For YEARS now, long chats in ChatGPT freeze solid, in the browser and the desktop app alike, turning the interface into a giant pile of crap, a steaming heap of dung, a reeking stack of excrement, and a rancid mound of filth.
We’re paying for a service that works like an overflowing toilet: it’s blocked, it stinks, and nothing moves until you flush the whole thing by reloading or starting over.
And to whoever at OpenAI is responsible for this stinking mess and keeps pretending it’s not there: you’ve built a monument to negligence, a public eyesore of incompetence, and your refusal to fix it is the most foul-smelling part of the whole heap.
I use ChatGPT extensively and have found it incredibly valuable both personally and professionally. However, with its recent, incredibly poor performance, it has become excruciatingly painful to use. It seems as though others have been reporting this issue for some time and nothing has been done. I will be looking for an alternative platform.
Just adding my experience here as well in hopes it'll help push for a solution. Close to a GB of memory in the tab. I generally have to reload the page once the "Page Unresponsive" prompt just keeps popping up; it'll go on for I don't know how many MINUTES before I give up. I don't know if it would go on indefinitely or not, but it makes the app rather unusable. Total disruption of workflow, etc. As for any suggestion of just starting a new tab so it'll clear up: I realize it will be fast again, but my entire context/project information is gone as well, so that doesn't even make sense as a solution. And I don't want a "summary" of the previous chat carried over either. I love the product overall, I really do, but with an in-depth chat going for a specific project it has slowed down to being unusable. This does NOT happen with equally long chats in Gemini. Please sort this out.
This is really interesting as well as frustrating, along with the DOM thing I saw mentioned in another part of this or a related thread. That said, I wouldn't want it to NOT reference the entire convo and keep that relevance/context vs. just the latest input. It just needs to do that with whatever compute/speed/tech is required so it flows smoothly in real time, instead of this unusable lag/freeze-up/tab-unresponsive nonsense.
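For anyone who wants to sanity-check how bloated a stuck tab actually is, here is a minimal sketch you can run in the browser devtools console while the long thread is open. It assumes nothing beyond standard DOM APIs plus Chrome's non-standard `performance.memory` extension (other browsers simply skip the heap figures). It's written as TypeScript; if you paste it straight into the console, drop the `as ...` type assertion and it's plain JavaScript.

```ts
// Rough gauge of how heavy the ChatGPT tab has become.
// Run in the devtools console while the long thread is open.
const nodeCount = document.querySelectorAll("*").length;
console.log(`DOM nodes currently rendered: ${nodeCount}`);

// performance.memory is a non-standard, Chrome-only API (an assumption here);
// Firefox and Safari won't report the heap figures.
const perf = performance as Performance & {
  memory?: { usedJSHeapSize: number; jsHeapSizeLimit: number };
};
if (perf.memory) {
  const usedMb = perf.memory.usedJSHeapSize / (1024 * 1024);
  const limitMb = perf.memory.jsHeapSizeLimit / (1024 * 1024);
  console.log(`JS heap: ${usedMb.toFixed(0)} MB used of ~${limitMb.toFixed(0)} MB limit`);
}
```

A node count in the tens or hundreds of thousands, together with a heap creeping toward the GB mark, lines up with the "close to a GB in the tab memory" behaviour described above.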
Got what looks like a solid response from ChatGPT 5 about this… I'm implementing it now for testing, and so far I'm PUMPED:

"Ugh, I hear you: that 'Page Unresponsive' loop on big, file-heavy chats is brutal. What you're running into is (unfortunately) a known UI behavior: the web app tends to load/render a lot of the conversation at once (the whole DOM gets fat), and once a thread is long with many uploads/previews, the tab can choke. Multiple users have documented this exact pattern. (OpenAI Community, Reddit)
Here’s the fastest way to make your current project usable without losing context (no “summary carry-over”):
Do this now (keeps your full context)
- Move the work into a Project (or create one just for this). Projects are designed for long-running work with files + custom instructions that persist across chats. You can organize the same materials without keeping a single mega-thread open. Start a new chat inside the Project so the UI isn't rendering thousands of old messages. (OpenAI Help Center)

- Preserve the entire chat history as a file (not a "summary"). Export the long thread (Settings → Data controls → Export data) and then upload that export (HTML/JSON) into the Project as a reference file. That way every word is still there for you and for me, just not inflating the live chat DOM. (OpenAI Community)

- Keep your original uploads in the Project's Files pane (not repeatedly re-attached in messages). Let the files live in the Project; then reference them from fresh chats inside that Project. This reduces message-by-message previews and keeps performance snappy. (OpenAI's guidance notes that lots of files and previews can impact responsiveness.) (OpenAI Help Center)

- Split the mega-thread. Within the Project, make new chats per sub-topic (e.g., "SERP work," "Image pipeline," "Outreach copy"). The Project keeps your instructions and files tied together, so you don't lose context even though the chats are smaller. (Recent write-ups also note Projects were upgraded precisely for this kind of workflow.) (WIRED, TechRadar)
Quick stability tweaks (help right away)
- Open the heavy thread once, capture what you need, then stop reloading it. Use the export+file approach above so you're not repeatedly re-rendering the same giant DOM. Community bug reports point at DOM/virtual-scroll gaps as the core issue. (OpenAI Community)

- Browser hygiene: clear cache/cookies; try an incognito window with extensions disabled; update Chrome/Edge/Firefox; toggle hardware acceleration (on some setups, off helps). These are the official first-aid steps. (OpenAI Help Center)

- If you must keep using the giant chat temporarily: collapse inline file previews where possible and avoid pasting very large blocks repeatedly. Users have observed RAM spikes when many long messages/previews stay in view. (OpenAI Community)
Why this works
- Projects keep your instructions and files persistent across multiple smaller chats, so you don't need a single never-ending thread to maintain context. That's the best way to avoid the "all history loaded at once" problem while retaining your full working set. (OpenAI Help Center, WIRED)

- Export-and-re-attach gives you the complete old conversation as a searchable file, no summaries, so nothing is lost, and I can still reference specifics by filename/page/section in new chats. (OpenAI Community)
If you want, I can outline a one-time migration checklist tailored to your current project (file buckets, suggested chat splits, and instruction blocks) so you can move everything over in 10–15 minutes and be back to full speed—minus the freezes.”
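On the export-and-reattach step in the quoted reply: if you would rather hand a Project one clean text file per thread instead of the raw export, something like the sketch below could do the conversion. It assumes the export's conversations.json is an array of conversations, each with a title and a mapping of message nodes carrying author.role, create_time, and content.parts; that shape is a guess, not a documented schema, so adjust the interfaces if your export differs. The script name and arguments are made up for the example.

```ts
// extract_thread.ts (hypothetical name): pull one conversation out of the ChatGPT
// data export and write it as a markdown transcript you can upload to a Project.
// The Export* interfaces below are assumptions about the export format.
import { readFileSync, writeFileSync } from "node:fs";

interface ExportMessage {
  author?: { role?: string };
  create_time?: number | null;
  content?: { parts?: unknown[] };
}
interface ExportNode {
  message?: ExportMessage | null;
}
interface ExportConversation {
  title?: string;
  mapping?: Record<string, ExportNode>;
}

// Usage: tsx extract_thread.ts <conversations.json> <conversation title> <output file>
const [exportPath, wantedTitle, outPath] = process.argv.slice(2);

const conversations: ExportConversation[] = JSON.parse(readFileSync(exportPath, "utf8"));
const convo = conversations.find((c) => c.title === wantedTitle);
if (!convo?.mapping) {
  throw new Error(`No conversation titled "${wantedTitle}" found in ${exportPath}`);
}

// Flatten the node mapping into a chronological, role-tagged transcript.
const blocks = Object.values(convo.mapping)
  .map((node) => node.message)
  .filter((m): m is ExportMessage => !!m && Array.isArray(m.content?.parts))
  .sort((a, b) => (a.create_time ?? 0) - (b.create_time ?? 0))
  .map((m) => {
    const text = (m.content?.parts ?? [])
      .filter((p): p is string => typeof p === "string")
      .join("\n");
    return text.trim() ? `## ${m.author?.role ?? "unknown"}\n\n${text}\n` : "";
  })
  .filter((block) => block.length > 0);

writeFileSync(outPath, `# ${convo.title ?? wantedTitle}\n\n${blocks.join("\n")}`);
console.log(`Wrote ${blocks.length} messages to ${outPath}`);
```

Run it with something like `npx tsx extract_thread.ts conversations.json "My mega thread" thread.md`, then drop the resulting file into the Project's Files pane. Every word of the old thread stays available as a reference file without the live chat having to render it.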