The addition of memory was the main reason I signed up for an annual ChatGPT Team account back in May of 2024. What a train wreck. Within about a week, they announced that the vast majority of features available to Plus and Team users were being given away for free to anyone without paying at all.
On top of that, I had eagerly spent literally about 20 hours trying to get this feature to ingest the project data from the 3 main projects I’ve been toiling away on alone for years. These are fairly small indie projects, ideated in my spare time. The main reason it took so long was that I had to send some of the data as images for ChatGPT to interpret and then manually correct its OCR, and the data came from a TON of disparate sources for notes (Jira, Keep, GitHub, et al.).
Initially, it seemed great! It was remembering my information and synthesizing it just as I expected. I also got it to remember formatting and response constraints for specific kinds of queries, without having to put those into the custom instructions (which were already full).
Imagine my surprise when, without warning, my ChatGPT instance became extremely forgetful. It gave no indication this was happening; data it had formerly remembered was simply overwritten, without any user notification whatsoever.
This resulted in me putting in probably ANOTHER 20 hours of uploading data, troubleshooting, and testing, thinking it was ME who had done something wrong. The process was badly encumbered by the fact that the more recent instructions and information I’d provided were functioning as expected, while older data rolled off the map without any indication at all. This was confusing and muddied the waters for a good long while. Eventually I figured out the root cause, and I was FURIOUS.
I had been a subscriber for less than a week, and the main feature I needed didn’t seem to work beyond fairly trivial use (we’re not talking a thousand pages of data here; under 50 pages for sure). This was not a huge ask. Gemini had already announced support (either imminent or already released; I can’t remember) for 1 million tokens of context.
All the OTHER features I paid for were now free to use. So I asked for a refund, minus the 4 or 5 days of use I had already consumed. Their answer? Well, first I had to spend about an hour with a chat bot that kept leading me down paths I had already tried, effectively gaslighting me about a problem I already knew full well was the issue.
I was then told to wait several days for an actual human being to consider my request. When I finally got in touch with one, they were remorseless and completely unhelpful. “Sorry, no refunds under any circumstances. We don’t stand by this product or our customers at all. Suck it up, buttercup.”
My year of membership is nearly over. There are innumerable worthy competitors in the space, and there are truly superior local models I can run on bog-standard hardware, with fairly simple setup, for my own RAG that will build a far better index to search. And what is OpenAI doing about all this? “We’ve upgraded our paying customers to have 25% more memory capacity! You lucky ducks!”
Wooooow. 25% more than 32k tokens of memory. What a technological marvel. So that’s 40k now, you say? Gemini Pro is 1.5 million. Grok 3 is a million. Even Claude, which has been slow to keep up with the competition, has 200k tokens of context. Meaning even the worst of the other models has 5x the context, and once 40k of that is set aside for prompting, it still effectively has 4x the memory left over.
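For what it’s worth, the back-of-envelope math above checks out; a quick sketch using only the figures already cited:

```python
# Rough context-size comparison, using the token figures cited above.
chatgpt_memory = int(32_000 * 1.25)  # a 25% bump on 32k
claude_context = 200_000             # the smallest competitor figure cited

print(chatgpt_memory)                       # 40000 tokens of "memory"
print(claude_context // chatgpt_memory)     # 5x the context
print((claude_context - chatgpt_memory)
      // chatgpt_memory)                    # 4x left over after prompting
```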
This is just unacceptable. There’s no way I’ll renew my subscription. I’ve rarely been this disappointed in a product I paid this much for.
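And the local RAG setup I mentioned really isn’t exotic. Here’s a minimal, purely illustrative sketch of the indexing half, using TF-IDF-style scoring instead of a real embedding model (which is what an actual setup would use); the note texts and query are made up:

```python
# Minimal sketch of a local note index for RAG-style retrieval.
# Illustrative only: a real setup would embed notes with a local model
# and use a vector store, but the shape is the same -- score stored
# notes against a query and hand the top hits to the model as context.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

class NoteIndex:
    def __init__(self):
        self.docs = []        # raw note text
        self.doc_terms = []   # token counts per note
        self.df = Counter()   # how many notes contain each token

    def add(self, text):
        terms = Counter(tokenize(text))
        self.docs.append(text)
        self.doc_terms.append(terms)
        for t in terms:
            self.df[t] += 1

    def search(self, query, k=3):
        q = tokenize(query)
        n = len(self.docs)
        scored = []
        for i, terms in enumerate(self.doc_terms):
            # term frequency weighted by rarity across the whole index
            score = sum(terms[t] * math.log(1 + n / self.df[t])
                        for t in q if t in terms)
            scored.append((score, i))
        scored.sort(reverse=True)
        return [self.docs[i] for s, i in scored[:k] if s > 0]

# Hypothetical notes from the kinds of sources mentioned above:
index = NoteIndex()
index.add("Jira ticket: fix OCR correction pipeline for scanned notes")
index.add("GitHub issue: memory overwrites older entries silently")
index.add("Keep note: grocery list for the weekend")
print(index.search("memory entries disappearing")[0])
```

The point being: once the notes live in an index I control, nothing “rolls off the map” unless I delete it myself.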