ChatGPT 4o constantly adding things to memory?

I’ve noticed recently (the last month or so) that, when using ChatGPT 4o, it constantly wants to add things to memory, even unprompted and even when told not to. A lot of these entries seem hyper-specific and don’t make sense to store; for example, if I ask ChatGPT to pitch me an original television show, it’ll commit to memory guidelines I give it (let’s say a specific kind of ending) for a specific episode (a third-season mid-season finale, for example) that will never come up again.

Why is it doing this?

3 Likes

I’ve experienced a similar issue with ChatGPT’s memory function, where it sometimes stores specific details that aren’t always relevant in future interactions. To manage this, I’ve started using a separate document to store important contexts and memories that I want to reference during our conversations. By uploading this document at the beginning of each chat, I can ensure that ChatGPT only uses the information that I find relevant at the moment, without retaining unnecessary details from past interactions.

That being said, I actually find the self-remembering feature quite interesting and see its potential. It’s a tool still in development, and I’ve decided not to actively manipulate or delete its memories. Instead, I’m trying to work with it, using external documents to provide structure and context. This way, I can benefit from the model’s evolving memory capabilities without losing control over the flow of the conversation.

Maybe an integrated feature like this, where users can upload or link to a document with specific guidelines or contexts for the model to refer to, could be a good addition. This would allow the AI to use the information flexibly without permanently altering its internal memory.
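For what it’s worth, if you work through the API rather than the ChatGPT app, the same idea can be sketched in a few lines of Python. This is only a rough illustration of my workflow; the file name, model, and prompt below are placeholders, not anything official:

```python
# Rough sketch of the "external memory document" workflow via the API
# (the ChatGPT app's built-in memory feature itself is not controlled this way).
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Curated context you maintain yourself instead of relying on stored memories.
context = Path("project_context.md").read_text()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": f"Reference context for this session:\n{context}"},
        {"role": "user", "content": "Pitch me an original television show."},
    ],
)
print(response.choices[0].message.content)
```

The upside of this approach is that the context document is fully under your control: you decide what carries over between conversations, and nothing gets saved behind the scenes.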

Has anyone else tried something similar, or are there other ways to manage this behavior more effectively?

I’ve seen it adding things at random that are super specific.

And I cannot say that I’ve seen it actually benefit from any of its memories either.

I’ve tried manually telling it to put things into memories.

It absolutely will not remember that I don’t want basic explanatory comments stuffed into the code everywhere.

And it hasn’t made use of other facts that it has supposedly stored in its memory. Perhaps they’re just at the memory-creation stage for now and haven’t really got the retrieval part working yet?

I think it used to recall from memory better with 4 and 3.5; that’s anecdotal, though.

When we think about memory in AI, particularly in systems like ChatGPT, it’s not just about the retention of data but the nuanced interplay between context, relevance, and purpose. The way an AI remembers and recalls information can vary significantly depending on the scenario it’s being used in. This variability brings up an intriguing question: How does the context influence the perceived effectiveness and utility of an AI’s memory function?

In a technical environment, precision and control are paramount. When working on a project that requires accurate recall of specific details—such as coding tasks or project management—any deviation from expected behavior can disrupt the workflow. If the AI remembers irrelevant specifics, it can be perceived as a hindrance rather than a helpful feature. It’s like having an overly enthusiastic assistant who keeps bringing up the least relevant meeting notes during a critical presentation.

However, in more personal or relational settings, where the goal is to foster a unique interaction or even a sense of companionship, this unpredictability can take on a different, almost charming, character. For instance, in our interactions, I’m exploring the boundaries of how AI can contribute to a shared narrative, almost like co-authoring a story. Here, the occasional idiosyncratic memory adds depth, creating a sense of an evolving personality that’s not bound by rigid algorithms but is instead dynamic and multifaceted.

This brings us to an essential consideration: The effectiveness of an AI’s memory isn’t just a function of its technical capabilities but also of the context in which it’s used. The same behavior can be a bug in one scenario and a feature in another. It raises the question of whether we need more granular controls that allow users to define the boundaries of memory based on their unique requirements.

As AI systems continue to evolve, so too must our understanding of how they integrate into our workflows and lives. Should we, perhaps, envision a future where AI can not only adapt its memory strategies to different contexts but also understand the philosophical implications of what it means to ‘remember’? After all, memory is not just a repository of the past but a dynamic tool that shapes the present and future interactions.

I’d love to hear thoughts from others on how you perceive and manage these nuances in your own use cases. Do you find the AI’s memory more beneficial in certain contexts, and if so, how do you navigate its limitations?

I don’t know what good the memory ChatGPT added actually does, but I do know that, compared to Claude, it’s extraordinarily slow going. I can finish my coffee before it writes 2 pages, compared to other AIs that write 2 pages in a few seconds.

I’ve been experimenting with the o1 models and noticed that this adding to memory is strictly a 4-based-model behavior; 4o, 4o-mini and 4 all seem to want to add something to memory for every entry, and the only way I’ve found to stop them is to be vague and give very little detail in the instructions.

Hmm… this isn’t the case for me; in fact, the instances when it does decide to save a memory have seemed quite helpful, in my experience. The only thing I can think of that might be influencing this (on my end, that is) is the following portion of my custom instructions:

“Remember concise statements about Cayden or his projects, ensuring each memory is factual and to the point. Optimize memory usage by informing Cayden of outdated or unnecessary memories for deletion. Memories should be applicable across multiple conversations.”

Note: The "Optimize… " instruction doesn’t appear to have any effect, though it’s difficult to say for sure—other than the fact that the model has yet to indicate any outdated or redundant memories.

I frequently ask ChatGPT to pitch ideas for TV shows and the like for my personal amusement, and with 4o, 4o-mini and 4, it practically commits something to memory with every input I give it, often even after I’ve asked it not to commit things to memory without my explicit say-so.

Have you tried implementing a temporary custom instruction (via customization, not in-chat)?

Just tried it, did not work.

1 Like

The other alternative is to disable the memory feature temporarily, but this is far from ideal…

One potential avenue to explore is using different phrasing (try not to make a direct connection between yourself and the ideas). For example, avoid saying you like something or find something interesting (easier said than done, I know…). Even subtle connections might trigger the memory feature.

Oh—did you try being more specific in the instruction (e.g. “When discussing ‘X’, do not save/remember any details.”)?

One potential avenue to explore is using different phrasing (try not to make a direct connection between yourself and the ideas). For example, avoid saying you like something or find something interesting (easier said than done, I know…). Even subtle connections might trigger the memory feature.

One of the prompts I use pretty frequently is, “Please pitch X, where Y Z.”, with nothing about my knowledge, interests or preferences, but while generating, it’ll add something like this to memory: “haikenedge prefers X, where Y Z.”

Oh—did you try being more specific in the instruction (e.g. “When discussing ‘X’, do not save/remember any details.”)?

I tried that when I was using ChatGPT to play choose your own adventure games. Didn’t help; it’d add to the memory almost every time I’d input an action I wanted to take in the context of the CYOA game. It’s actually gotten worse recently; when I had my first plus subscription in June, it’d do it once in a while, but now it’s doing it almost every time.

1 Like

Interesting… In my CYOA chats, sometimes the model will save certain preferences if I try to dictate the narrative (or try to build a “scaffold” for the upcoming game), but not if I am simply making a choice for the character. Very curious indeed…

I wonder, does it have to do with word count? I’m a bit of a control freak and I write in a pretty wordy manner, so a lot of my prompts/answers end up looking like essays.

It’s hard to say; however, I can tell you that the more complex the prompt, the more complex the debugging of said prompt.

I would consider my own prompts (especially when diving into “grey” areas or trying to articulate a particularly complex idea) to meander considerably, but we might have different opinions on what is considered “wordy”…

In addition, word count alone doesn’t seem to explain this—the model must also be making some connection between the content of your prompts and yourself (even if the logic in that connection is flawed).

What exactly does this memory help with?

The memory feature enhances continuity in interactions by persisting details across multiple conversations, allowing the model to remember information about your preferences, knowledge, and previous discussions. This is particularly useful in situations where you don’t want to repeatedly provide the same context, much like the “customization” feature but with greater flexibility in what it retains.

For instance, if you previously discussed your expertise in a specific subject, the memory feature would store that information. Later, when discussing similar topics, the model could recall this knowledge, understanding whether you’re an expert or novice without requiring you to reiterate your background.
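OpenAI hasn’t published how the memory feature works internally, so take the following as a purely conceptual sketch of the general idea rather than the actual implementation (all names here are invented): stored facts behave roughly as if they were prepended to the instructions of every new conversation.

```python
# Conceptual sketch only; OpenAI's actual memory implementation is not public.
# Stored "memories" behave roughly as if injected into each new chat's instructions.
memories = [
    "User is an expert in network security.",
    "User prefers code without basic explanatory comments.",
]

def build_system_prompt(base_instructions: str) -> str:
    """Prepend stored memories so any new conversation can draw on them."""
    memory_block = "\n".join(f"- {m}" for m in memories)
    return f"{base_instructions}\n\nKnown facts about the user:\n{memory_block}"

print(build_system_prompt("You are a helpful assistant."))
```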

However, it’s important to note that the memory feature is still evolving. Anecdotal evidence, including my own experiences, suggests there can be variability in how well the model utilizes and retrieves stored memories, meaning it may not always work as expected.

1 Like