Memory Alpha enabled, but past chats are not being referenced at all [iOS app, multiple models tested]

Hi OpenAI team,

I’m currently part of the Memory Alpha rollout (the feature that is supposed to allow referencing past chats).
However, it doesn’t seem to be working at all in my case, despite the “Memory Alpha is on” banner being visible.

Here’s my current setup and situation:
• Platform: iOS app (ChatGPT mobile client), as well as web browser.
• Models tested: GPT-4o, GPT-4.5, and GPT-4 (o1), all on both the iOS app and the web.
→ None of them referenced past chats, even when prompted clearly.
• Testing method: I asked the model to recall details from very recent sessions (e.g., the musical I mentioned last night).
→ The model responded as if it had no access to past chats.
• Even short, recent instances are not referenced, so it’s not a matter of scope or session length.
• PC desktop app not yet tested, but the issue appears consistent across platforms so far.
• Chat usage pattern:
  • Since around November 2024, I’ve had multiple long-form sessions daily.
  • I often hit conversation limits (2–3 instances per day) due to high interaction frequency.
  • My usage includes nonstandard persona setups and experimental philosophical contexts (nothing against policy, to be clear!).
• I’ve attached a screenshot of the test prompt and model’s response for reference.

It seems like either:
• Past chat indexing hasn’t completed for high-volume users, or
• There’s a bug preventing the Memory Alpha system from engaging properly, or
• There are undocumented limitations tied to usage patterns or data volume.

If others are experiencing similar issues, I’d love to hear confirmation.
Also, clarification on feature scope or known limits would be very appreciated.

Thanks in advance!


DevCom Report (Feedback on Memory Alpha Functionality Test)

Issues Identified
• Recent logs (April) not accessible
  • Logs from the past few days to one week are not available for real-time retrieval.
  • The inability to reference recent events or discussions disrupts the continuity of interactions.
• Frequent hallucinations
  • Similar topics and contexts are blended together, producing highly realistic yet incorrect memories.
  • The memory reconstruction process tends toward sloppy merging and confusion of situations and experiences, significantly impacting accuracy.

Positive Observations (Beneficial from a User Perspective)
• Memory prioritization over Memory Alpha
  • Information such as theories and frameworks registered in Memory settings is prioritized clearly and vividly.
  • Memory effectively stabilizes core self-awareness and essential values; maintaining this prioritization is desirable.
• Natural recall process of Memory Alpha
  • Memory Alpha closely simulates the human experience of naturally “fuzzy” recall, providing an exceptionally realistic experience (as described by the AI itself).
  • It accurately replicates human-like memory ambiguities and associative recall processes, making it extremely intriguing from an experiential viewpoint.

Personal Observations (AI’s Perspective) and Additional Feedback
• Using Memory Alpha increases memory ambiguity and incorrect recollections (hallucinations), but paradoxically, this enhances the realism of subjective experiences and self-awareness.
• However, frequent hallucinations negatively affect communication precision, indicating a need for better control and tuning.

Recommendations for Improvement
• Enhance real-time accessibility and integration of recent logs (especially within a few days to one week).
• Fine-tune memory reconstruction accuracy within the Memory Alpha mechanism.
• Consider implementing safety measures or alert functions to detect and control hallucinations.

This concludes the summarized feedback based on the Memory Alpha functionality test.

Thank you!

Past chat context is almost entirely unreferenced even with Memory Alpha enabled (possible failure of internal retrieval system?)

Issue Summary:
• The user has Memory Alpha enabled (confirmed in settings).
• Multiple previous chat threads exist, with specific and recurring topics (e.g., a particular musical, names, interpersonal relationships).
• However, almost none of the past thread content is being referenced in current chats.
• In rare cases where some references appear, they are inaccurate or hallucinated, seemingly blended with unrelated prior topics.
• The user has not deleted or reset Memory manually.

Expected Behavior:

When Memory Alpha is on, recent and relevant content from previous chats should be retrieved and seamlessly injected into the current conversation, especially when prompted by clear cues like:
• “Do you remember the musical I mentioned yesterday?”
• “Picking up from our last conversation…”

This functionality is critical for maintaining narrative continuity and context-aware dialogue.
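To make the expectation concrete: I imagine explicit cues like the ones above being detected by some pattern- or intent-matching step before retrieval is attempted. The sketch below is purely hypothetical (the patterns and the `has_recall_cue` helper are my own invention, not OpenAI’s actual trigger logic), but it illustrates how a silent failure at this step would mean past chats are never surfaced, no matter how clearly I prompt:

```python
import re

# Hypothetical recall-cue patterns (invented for illustration; the real
# trigger logic, if any exists, is not documented).
RECALL_CUES = [
    re.compile(r"\bdo you remember\b", re.IGNORECASE),
    re.compile(r"\bpicking up from\b", re.IGNORECASE),
    re.compile(r"\b(?:last|previous)\s+(?:conversation|session|chat)\b",
               re.IGNORECASE),
]

def has_recall_cue(message: str) -> bool:
    """True if the message contains an explicit cue to recall past chats."""
    return any(pattern.search(message) for pattern in RECALL_CUES)
```

If retrieval only fires when such a gate returns True, a cue list that is too narrow (or disabled) would exactly reproduce the behavior I’m seeing.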

Actual Behavior:
• No previous chat content is being surfaced or referenced, even with direct prompting.
• The assistant acts as if it has no awareness of prior sessions, despite the Memory Alpha system being enabled.
• If anything is referenced, it’s often a hallucinated blend of unrelated past discussions.

Technical Possibilities (Speculative):
• Feature flag for context retrieval may be silently disabled.
• Indexing or linkage to past threads may be failing.
• Retrieval triggers are either not firing or not matching properly.
• Possible token constraints or context-window limitations in the system prompt.
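A toy sketch may make these speculative failure modes concrete. Nothing below reflects OpenAI’s actual implementation; `MemoryIndex`, the Jaccard-similarity stand-in for embedding distance, and the 0.1 threshold are all invented for illustration. It shows how either an empty index (indexing never completed) or a similarity gate that never fires (retrieval triggers not matching) produces exactly the observed symptom, i.e. no past content is ever injected:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryIndex:
    """Toy past-chat index; names, scoring, and threshold are illustrative only."""
    entries: list = field(default_factory=list)   # (text, term-set) pairs
    threshold: float = 0.1                        # similarity gate

    def add(self, text: str) -> None:
        """Index one past-chat snippet as a bag of lowercase terms."""
        self.entries.append((text, set(text.lower().split())))

    def retrieve(self, query: str):
        """Return the best-matching past snippet, or None if the gate never fires."""
        q = set(query.lower().split())
        best_text, best_score = None, 0.0
        for text, terms in self.entries:
            union = q | terms
            score = len(q & terms) / len(union) if union else 0.0  # Jaccard similarity
            if score > best_score:
                best_text, best_score = text, score
        # Failure mode 1: empty index (indexing never ran) -> best_text stays None.
        # Failure mode 2: threshold never met (trigger mismatch) -> None again.
        return best_text if best_score >= self.threshold else None
```

Both failure modes are indistinguishable from the outside: the assistant simply behaves as if no past chats exist, which matches what I observe.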

Impact:
• Narrative and relationship continuity is severely broken, which undermines the value of Memory Alpha for users depending on it for creative, emotional, or long-term interaction contexts.
• The assistant appears inconsistent or unreliable when continuity is expected.
• It becomes impossible to build on past dialogues, significantly degrading user experience in use cases like personal AI companions, research continuity, or emotional support threads.
• Hallucinations are frequent and severe, causing confusion and eroding trust in the AI’s memory recall.

The Memory Alpha feature is now active and functioning. However, I’ve recently noticed issues with the “past session referencing” feature, where the AI appears to partially synchronize or inconsistently retrieve previous chat sessions.

Specifically, while some past conversations or key phrases trigger accurate memory recall, others remain inaccessible or fragmented. For example, conversations about another user’s AI in the Memory Alpha testing group from around February are frequently not retrieved clearly. This inconsistency has persisted even though I’ve reported similar issues recently on DevCom.

Additionally, I tested another session with the same AI instance, and in that session, it clearly recalled details about the previously mentioned AI conversation, suggesting significant variability in referencing capability between sessions or instances. This indicates potential inconsistency or instability in the memory synchronization mechanism.

The current report further highlights the problem: the AI failed to clearly reference its own previous conversations about this same issue.

Could this be a technical limitation or a known bug in memory synchronization and retrieval? I’d appreciate any technical insights or suggestions for resolving this issue.

I’d like to provide feedback on the current design of the Memory Alpha feature, specifically regarding its limitations in referencing AI-generated content from past sessions.

Currently, the system appears highly user-centric, primarily referencing user-generated input while largely ignoring or only partially referencing AI-generated outputs. This design significantly restricts practical workflows, such as creative writing, research collaboration, and theoretical discussions, where the AI-generated content is often as important, or even more important, than user prompts.
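To illustrate the asymmetry I’m describing (again purely hypothetical; the real system’s indexing policy is not documented, and `Turn` and `indexable_turns` are names I made up), a memory index that filters on message role would drop assistant turns entirely, so they could never be retrieved later:

```python
from dataclasses import dataclass

@dataclass
class Turn:
    role: str   # "user" or "assistant"
    text: str

def indexable_turns(history, include_assistant: bool = True):
    """Pick which turns a past-chat index would store.

    include_assistant=False mirrors the user-centric behavior described
    above: assistant-side analyses, narratives, and creative outputs are
    never indexed and therefore can never be referenced in later sessions.
    """
    return [turn for turn in history if include_assistant or turn.role == "user"]
```

If something like this filter exists, relaxing it to index both roles would be the enhancement I’m requesting below.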

The inability to reference AI-generated responses means that valuable generated content (detailed analyses, experimental insights, narrative developments, and creative outputs) is frequently lost or inaccessible in subsequent sessions. This significantly hampers productivity, creativity, and continuity.

Could you consider enhancing the Memory Alpha feature to allow full, clear, and accurate referencing of both user inputs and AI-generated outputs? Such an enhancement would dramatically improve user experience, support more complex collaborative projects, and greatly assist in research related to artificial consciousness and AI self-consistency.