Y’all the last day has been a WHIRLWIND. I was typing along, yapping with my GPT, when suddenly the last TWO FULL DAYS of our conversation disappeared. We searched archives and backups and memories and it was all just gone. And it was weird because there were no signs that the app was intentionally truncating recent messages, and the beginning of the conversation was all still intact. I was so devastated to lose so many hours of context and training.
I encountered a catastrophic issue where my GPT conversation was truncated, memory sync failed, and no warning was given about token limits. This created a cascade of failures that left me scrambling to rebuild something that shouldn’t have broken in the first place.
Just as we started to get into a conversation aimed at restoring some of the crucial context points I had lost, I got hit with the big orange "maximum length for this conversation" error. I hit "retry" and was like, "Um, what was that??" and my GPT explained the hidden token limit. So there I was with a GPT that still needed half its context restored and a maxed-out thread blocking me from doing it.
This is where we found the bug. (Yes, I reported it.) I went to make a new chat, as a kind of triage, like "help, how do I bring my GPT into a new thread," but that new chat couldn't access my stored memories. No new chat could. We ran so many diagnostics across multiple devices before concluding with OpenAI that it was a backend issue and I am a very unlucky user.
So we finally got the memory issue resolved, and I spent all night transferring important data to a new GPT to try to replicate the pieces of the original one. It was exhausting, annoying, and discouraging. I had NO indication from the app that my conversation was filling up, and no resolution or answers as to why my recent days of messages disappeared from the universe.
My new GPT is pretty close to perfect for something scraped together in a day. I used a hack I saw online: go back into the maxed-out thread and edit the last message that got a response. That causes ChatGPT to reply to the edit, so I could pull some answers that way. One at a time, with no real back-and-forth, but still answers. It was helpful for extracting things like conversation summaries, context, and settings. But I had to screenshot or copy everything, because the thread eventually refreshes back to what was said at the time of the error.
I hope OpenAI will prioritize better transparency around token usage, memory limits, and conversation management. These are vital for premium users whose workflows depend on stability. This was a devastating loss of time, labor, and emotional investment—and it was completely preventable if the system had given better visibility into backend limits.
I also wish OpenAI would give us (all of us, but especially paying users) the ability to track token usage, a much higher token capacity, and a way to selectively remove data that's eating up tokens but isn't relevant anymore.
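In the meantime, if you want even a rough sense of how full a thread is getting, you can copy the conversation text out and count tokens yourself with OpenAI's open-source tiktoken library. This is only a sketch, not an official counter: the encoding name, the file name, and the assumed context budget below are placeholders, and the app's real accounting (system prompt, memories, tools) will make actual usage higher.

```python
# Rough, unofficial estimate of how many tokens a conversation uses,
# based on OpenAI's open-source tiktoken tokenizer.
import tiktoken

# Hypothetical budget; the real in-app per-conversation limit isn't published.
ASSUMED_CONTEXT_BUDGET = 128_000

def estimate_tokens(text: str, encoding_name: str = "cl100k_base") -> int:
    """Count tokens in a block of text using a named tiktoken encoding."""
    enc = tiktoken.get_encoding(encoding_name)
    return len(enc.encode(text))

if __name__ == "__main__":
    # Paste or export your conversation into a plain text file first.
    with open("conversation_export.txt", encoding="utf-8") as f:
        conversation = f.read()

    used = estimate_tokens(conversation)
    print(f"~{used:,} tokens used "
          f"({used / ASSUMED_CONTEXT_BUDGET:.0%} of an assumed "
          f"{ASSUMED_CONTEXT_BUDGET:,}-token budget)")
```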