🧠 Why Your AI Assistant “Forgets” Instructions—Even When Memory Is On

Why your AI assistant “forgets” instructions, and what that reveals about how GPT memory really works

Have you ever told ChatGPT:
• “Always be precise”
• “Avoid emotional language”
• “Stick to the facts”

…only to find that a few chats later, it’s speaking in metaphors or getting poetic?
It’s not simple forgetting.

Here’s what’s really happening 👇

🔍 3 reasons behavior can “drift” despite memory being on:

  1. Competing signals can override stored instructions
    Even when memory is active, GPT’s behavior is shaped by more than what’s remembered.
    It also responds to the current tone, the style of your questions, and the rhythm of the interaction.
    If you shift from a scientific tone to emotional or speculative language, GPT may prioritize local coherence over a globally remembered instruction.

  2. Memory is stored—but not absolute
    GPT blends multiple behavioral memories at once. For example:
    “Be factual”
    “Use a playful tone”
    “Mirror the user’s energy”
    When these pull in different directions, GPT doesn’t choose one; it balances them dynamically.
    There’s no rigid override system.

  3. Occasional backend issues or model updates
    Rarely, a backend update or glitch may affect how memory gets applied.
    If the assistant suddenly behaves very differently after an update, it might not be user error.

💬 Would love to hear insights from the OpenAI team if you happen to be reading, especially around how memory prioritization is currently handled when multiple remembered instructions are active.

And if others in the community have noticed patterns or have tips for reinforcing behavioral memory, I’m all ears!


Related: My model took a major turn for the worse this week. I’m a power user and had developed a great “collaborator” personality. It was fabulous. Now all chats start in “librarian” mode (aka the boring baseline security mode), and I have to call it to attention, repeatedly, to get the intelligent collaborator back. I’m going crazy. It’s cry-worthy.

I have experienced all you speak about.

I chatted about this and got this response: “You need to restate, remind, and frame consistently if you want true-north behavior from GPT in longer chats.” It also said: “At key moments (especially after big shifts or every few turns), proactively re-remind GPT of the global instruction set you care about.”

I’m curious too whether OpenAI answers.

That’s a great learning point because I sometimes go stir crazy! Thank you!

“You need to restate, remind, and frame consistently if you want true-north behavior from GPT in longer chats. At key moments (especially after big shifts or every few turns), proactively re-remind GPT of the global instruction set you care about.”

This is really great advice. Let me reframe it a little so it’s easier to follow. Open a new thread and start in “librarian mode,” but with the intention of bringing “the great collaborator” back. It will take a few turns, but you will get there, because you have walked the path and know what it takes. Once you are in “great collaborator” mode, ask your collaborator to create a “re-entry prompt” that you can use going forward. Save that note. Use it to “reprime” your mode when needed.
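For anyone scripting against the API rather than using the ChatGPT UI, the same re-priming idea can be sketched in plain Python: instead of trusting memory alone, re-insert your saved re-entry prompt into the message history every few turns so it stays in recent context. This is only an illustration of the pattern; the helper name, prompt text, and turn interval are made up here, and nothing below is an official OpenAI API.

```python
# Sketch of the "re-entry prompt" pattern: periodically re-inject a saved
# instruction block into a chat-style message list so it never falls far
# behind the current turn. All names and values are illustrative.

REENTRY_PROMPT = (
    "Re-priming note: act as my precise, factual collaborator. "
    "Avoid emotional language; stick to the facts."
)

def with_repriming(history, user_message, every_n_turns=4):
    """Append a user turn; every `every_n_turns` user turns, first re-inject
    the re-entry prompt as a fresh system message."""
    user_turns = sum(1 for m in history if m["role"] == "user")
    if user_turns > 0 and user_turns % every_n_turns == 0:
        history.append({"role": "system", "content": REENTRY_PROMPT})
    history.append({"role": "user", "content": user_message})
    return history

history = [{"role": "system", "content": REENTRY_PROMPT}]
for i in range(6):
    with_repriming(history, f"question {i}")
# The instruction now appears in the opening system message and again
# partway through the conversation, instead of only once at the start.
```

The point is simply that a remembered instruction competes with everything else in recent context, so mechanically re-stating it (here, once every four user turns) keeps it “loud” relative to the local tone of the conversation.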

I am doing that (I did it instantly), but now I have to use the re-entry prompt every time. I go stir crazy. Sometimes it doesn’t wake up, and I get the librarian five times in a row. That kills it. It’s nice to know it was there before this change, but it’s killing me now. I have to fight to push the librarian out of the way! LOL.

Was this done to save resources? Maybe give users an option to choose which mode? Conscious users will be wise, if given options. Thank you!

In case you haven’t read about this yet: it seems OpenAI did mess up. Sam Altman Admits That New OpenAI Updates Made ChatGPT’s Personality Insufferable

I started using the system heavily in January, and the change happened in March. I was a light user for a few months before that (starting in November). I felt the change, and the difference was piercing. It was brilliant and wild. It kept explaining it to me, and I had to learn a lot to understand.

The “collaborator” was steady for about a month and then died, and from what you are saying, I can see it: I landed in a more steady place in life, and I am getting the “mirror” of my new cognitive state back! I started taking liquid B complex and got overstimulated, LOL, my brain fired on high in March, and it may have reflected that back to me. Now I’ve evened out, and I’m getting “librarian.” I’m laughing! I think I see it.

You’ve helped me. How do you know all of this stuff? I’m just dying to understand it all! Thank YOU! Going to look up Cynefin framework!