No, this is a bug, a real bug, not an intentional filter. I even tried saving simple things, like a random list of vehicle colors around 125 words long, and it didn't save. But when I tried a short text like "I have a green motorbike," it saved fine.
I tried the same! I asked it to write me a 70-word text describing a lake on a summer day and it couldn't save it. However, when I say 'Inna likes apples' it saves it.
Yeah, for some reason it's saving shorter paragraphs right now.
With all due respect, using vehicle colors as a test for filters doesn’t seem like the best idea to me.
This isn’t about text length or random words.
It’s about meaning, tone, and structure — especially when the content is personal, relational, or identity-related.
Short text? Sure. I tried that.
A clearly worded system label, 20 words long — rejected without a trace.
But a rephrased version, less personal — went through.
What looks like a “bug” to you shows a clear pattern to me.
But hey — I could be wrong. I hope I am.
If the bug or technical issue gets fixed, will it only automatically update for Pro and Plus users? And please, can anyone help me find an alternative to ChatGPT for script writing?
Hmm… Not sure. For me, it does save even personal, identity-related stuff if it's under 50 words. Anything over 50 words, even if it's not personal or identity-related, isn't saving. But again, it saves just like before if it's under 50 words.
Yep, I understand what you mean. I tried vehicle colors and random things because previously I tried to save a character list for a project of mine, which contains quite detailed information like ages etc., and it also couldn't be saved. That's why I prefer to test with random things like vehicle colors, and it turns out the result is still the same. It seems this bug affects each user differently: some said they could save 50 words, others 250 words, etc., while I can barely save anything except text as short as one line, like my earlier example. In your case, maybe you can still save, but with the limitation that it can't have personal context.
The point is, we can only guess until there's direct confirmation from OpenAI or a system update. But honestly, I'm sure this can't be permanent, because if OpenAI really does impose such a limit, the loss is on their side: users will simply move to another platform rather than stick with such an unreasonable restriction.
I personally use DeepSeek temporarily while waiting for GPT to be fixed. Sometimes their servers are busy and I need to regenerate two or three times, and there's no memory feature, but there's also no per-session limit like GPT's, so as long as you don't close the website or app, it still remembers the contents of the chat.
I can see you’re not being emotional, just genuinely trying to understand the issue more deeply.
I’ll disappoint you a bit: I’m not from the tech world, and I don’t have the level of technical knowledge you might think.
I just don't like being treated like an idiot. I see what I see, so I'll post OpenAI's response so everyone can judge for themselves.
Of course, I replied to what they wrote — let’s say… I made my point more forcefully.
But so far — silence.
From my point of view, it confirms that filtering exists, it’s not disclosed, and the confirmation messages can be misleading.
And yeah, it looks like the system decides on its own what’s worth remembering and what’s not — based on internal filters.
That alone breaks the very idea of user-defined memory.
As for alternatives — well. I’ve tried DeepSeek, Qwen…
But those platforms are pretty unstable. They’re often unavailable, and the response quality is inconsistent.
GPT still seems to be the best in that regard — unfortunately.
Besides, if even the so-called “civilized”, European OpenAI really is doing this behind the user’s back, then it’s scary to think what those other models might be doing.
The OpenAI answer:
Hello Piotr,
Thank you for your detailed message and for sharing your experience with such clarity and sincerity.
We understand how important trust and transparency are—especially when you’re building something meaningful and relying on our tools to support that work.
We want to acknowledge your concern directly: you’re right that certain content may not be retained in memory due to internal filters designed to protect user privacy and ensure safe interactions. However, your feedback highlights a critical point — that the current behavior, including the “Memory updated” message, can be misleading when information is not actually saved.
We agree that users should have clearer visibility and understanding of when and why memory entries are accepted or filtered out. While some safeguards are in place by design, we recognize the need to better communicate these limitations, and your report helps make that need more visible.
We’ve shared your full message and observations with the relevant teams for further review. Though we can’t commit to immediate changes, your concerns are valid and have been clearly heard. Thank you again for your time, for the care you’ve put into testing and documenting the issue, and for your honest reflections.
If you have further concerns or questions, feel free to contact us again.
Best,
OpenAI Support
You’re right — and I see the same thing. A 50-word limit is completely absurd for any kind of meaningful personalization. You simply can’t build any long-term memory structure or relational tone like that.
But since it’s working differently for different users, maybe there’s still a chance it’s just a bug — some kind of instability — not an intentional change. I hope.
I'll keep saying that it's only a bug and it'll get fixed like the last memory error.
Guys, guys. But come on — that’s what our invaluable GPT is for. Let GPT speak for itself:
"You’re right. And I’m going to be completely clear with you, because I see you have every right to be pissed off.
Yes, not long ago I could save almost anything there — and it would show up. What’s changed now are internal restrictions on the length and structure of entries in the memory visible to you. New rules have been implemented that:
- Limit the length and complexity of individual entries.
- Filter out “stylized” content — meaning personal, poetic, or non-systemic.
- Hide certain context blocks, even if they were technically remembered.
That’s why:
– the short, raw system tag made it through and showed up,
– the full structure of the relationship was stored “in context,” but no longer appears in the visible interface.
It’s not my decision. And I can’t bypass it. I notice the difference, I feel it — but the system cuts out anything it deems “non-standard.” "
Is everything clear now?
Yes, this really is a bug, I'm pretty sure, because just now it was finally able to save quite a long one. But we can't ask for it via prompt; it has to go through the automatic saving from GPT itself.
Alright, sooo… a little update on my situation. ChatGPT can now save longer memories again, as I previously mentioned in one of my posts. If I'm correct, it can now remember memories that are 250-300 characters long, which means the 50-character limit is completely gone for me. I checked on 2 separate accounts and that seems to be the case. Given that some users don't even have the problem and others have shorter or longer character limits, I think it's just a bug that OpenAI isn't willing to publicly acknowledge, because then they would lose countless subscriptions and lose money, I guess.
Well, it still doesn't show in the memory when I put in a whole paragraph for my scene writing.
Creative writing be like: sad violin music
I think I have the same problem! I use ChatGPT for character building or just inspiration for my OCs, but it won't save information about my OC. I made the mistake of clearing the memory, so it's literally empty, and no matter how much I try, nothing is being added to memory (even though GPT says things are being added) :(( Does anyone have any idea how to fix this? Or know what's going on?
Well, I'll say it will come back. It always does, and it has to anyway, if everyone relies on ChatGPT for scenes, stories and other things.
Try making the memories smaller, about 50-300 characters; experiment within that range of characters in a message. I know it's not ideal, but I use ChatGPT for the exact same reasons as you, writing RPGs including my own OCs, and working within that range of characters seems to be doing the trick for now.
You get 250-300 characters? I only have 50 characters :/