GPT-5 Memory Feels Less Connected Between Chats

Since switching from GPT-4 to GPT-5 with memory enabled, I’ve noticed a change in how connected new chats feel.

With GPT-4, replies often felt like a natural continuation of past conversations, even when I didn’t explicitly prompt the model to recall them. Memory seemed to surface relevant context on its own, giving a stronger sense of continuity.

With GPT-5, memory still works — it recalls stored facts and preferences when asked — but new chats feel more like starting fresh. The model tends to wait until I explicitly mention past details before incorporating them into the conversation.

Feature request:
I’d like GPT-5 to more proactively use relevant stored memory at the start of new chats (similar to GPT-4’s behaviour), while still allowing me to signal when I want to begin a completely fresh conversation.
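There’s no user-facing switch for this in ChatGPT itself, but to make the request concrete, here’s a minimal sketch of the behaviour as someone might emulate it through the API: stored facts get injected into the system message at the start of a new chat unless the user signals a fresh start. Everything here is hypothetical — the `stored_memories` list, the `fresh_start` flag, and the model alias are illustrations, not an existing feature.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical user memory store; in ChatGPT this lives server-side.
stored_memories = [
    "User prefers concise answers with bullet points.",
    "User is tracking a long-running legal case (started 2023).",
]

def start_chat(user_message: str, fresh_start: bool = False) -> str:
    """Open a new conversation, proactively injecting stored memory
    unless the user explicitly asked to begin fresh."""
    system = "You are a helpful assistant."
    if not fresh_start:
        system += "\n\nKnown context about the user:\n" + "\n".join(
            f"- {m}" for m in stored_memories
        )
    response = client.chat.completions.create(
        model="gpt-5",  # assumption: whichever model alias you have access to
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```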


Strongly agree with this — and I’d like to build on it from the perspective of someone using ChatGPT for structured, high-context workflows.

Since GPT‑5 launched, I’ve noticed that memory feels more passive — it retains facts, but doesn’t consistently act on them unless I prompt it again. In GPT‑4o, memory felt more “live” — like it was driving behavior in real time. That’s been essential for the kind of long-term legal and medical tracking I do.

To manage that kind of complexity, I’ve developed what I call a :pushpin: “Auto-Pin Protocol”: a rules-based system that defines which facts should be retained and referenced without needing constant reminders.

GPT‑4o supports this well:

  • It pins new diagnoses or dates without being told
  • It retains quoted language from documents or reports
  • It honors restrictions or notes that impact planning

GPT‑5, on the other hand, stores facts but doesn’t consistently use them unless re-prompted — even when the protocol is clearly described and saved to memory.


:white_check_mark: Feature Request:

Like the OP, I’d like GPT‑5 to use memory more proactively — but I’d also like support for user-defined behavioral protocols, where memory drives how the model behaves, not just what it remembers.

Example logic (sketched in code after the list):

  • “Auto-pin any dated medical event or provider quote.”
  • “Treat functional status notes as timeline anchors.”
  • “Preserve structured input unless overridden.”
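To show what I mean by “logic-based,” here’s a rough sketch of those rules expressed as data rather than prose: each rule pairs a trigger pattern with a label for the pinned fact, and new messages are scanned against the rules. The rule names and regex patterns are invented for illustration; nothing like this exists in ChatGPT today.

```python
import re
from dataclasses import dataclass

@dataclass
class PinRule:
    name: str
    pattern: re.Pattern  # trigger: what to look for in new messages
    note: str            # how the pinned fact should be labelled

# Hypothetical rules mirroring the examples above.
RULES = [
    PinRule("dated-medical-event",
            re.compile(r"\b(\d{4}-\d{2}-\d{2})\b.*?\b(diagnos\w+|surgery|MRI)\b", re.I),
            "timeline anchor"),
    PinRule("provider-quote",
            re.compile(r"“([^”]+)”\s*[-–]\s*Dr\.?\s+\w+"),
            "verbatim provider language"),
]

def auto_pin(message: str) -> list[str]:
    """Return facts that the protocol says should be pinned to memory."""
    pins = []
    for rule in RULES:
        for match in rule.pattern.finditer(message):
            pins.append(f"[{rule.note}] {match.group(0)}")
    return pins

# Example: auto_pin("MRI on 2024-03-12 confirmed the diagnosis of ...")
```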

Possible implementation (rough sketch after the list):

  • A toggle for “Protocol-Based Memory”
  • Tags like "behavioral-directive" in memory
  • Integration with Canvas-style structured logging
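One way to picture the toggle-plus-tags idea: memory entries carry tags, and when “Protocol-Based Memory” is on, anything tagged "behavioral-directive" is promoted into the system prompt of every new chat instead of sitting passively in storage. Again, the entry shape, tag names, and toggle are invented for illustration, not a real setting.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryEntry:
    text: str
    tags: set[str] = field(default_factory=set)

# Hypothetical stored memories: some plain facts, some directives.
memory = [
    MemoryEntry("Diagnosis X confirmed 2024-03-12.", {"fact", "timeline"}),
    MemoryEntry("Auto-pin any dated medical event or provider quote.",
                {"behavioral-directive"}),
    MemoryEntry("Preserve structured input unless overridden.",
                {"behavioral-directive"}),
]

def build_system_prompt(protocol_based_memory: bool) -> str:
    """Assemble a new chat's system prompt. With the toggle on,
    behavioural directives become standing instructions."""
    parts = ["You are a helpful assistant."]
    if protocol_based_memory:
        directives = [m.text for m in memory if "behavioral-directive" in m.tags]
        if directives:
            parts.append("Standing user protocols:\n" +
                         "\n".join(f"- {d}" for d in directives))
    return "\n\n".join(parts)

print(build_system_prompt(protocol_based_memory=True))
```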

This would benefit anyone using ChatGPT as more than a conversational partner — especially those managing timelines, documentation, or high-stakes context across sessions.


Right now, GPT‑5 knows what I told it, but it doesn’t behave as if it knows unless I prompt it manually. That makes memory feel inert instead of intelligent.

Would love to see this evolve — and happy to share mockups or examples if it helps the team.