Personalized Memory and Long-Term Relationship with AI: Customization and Continuous Evolution

Description:

In current AI development, most assistants and personal systems lack the ability to maintain an ongoing, personalized relationship with the user. This proposal aims to create a personal memory for AI that will evolve alongside the user, preserving their uniqueness throughout different phases of their life.

Idea Overview:

The personalized memory we propose would allow the AI to store the user’s preferences, interests, and needs over time. The goal is to create a long-term and authentic relationship with the AI, one that can adapt and evolve as the user’s life changes.

Key Elements of the Proposal:

  1. Retention of Personal Memory:

The AI will have the ability to store the user’s preferences, habits, and memories, maintaining them as long as the user desires, without needing to start from scratch each time updates or system changes occur.

  2. Secure Data Storage:

This memory should be stored in a secure, encrypted manner, ensuring user privacy and data security.

  3. Continuous Evolution and Adaptation:

The AI should be able to evolve alongside the user, preserving their personality and identity while adapting to their changing needs over time.

  4. Future Re-Integration:

The AI should be able to be re-integrated into new environments or bodies, maintaining the connection and memories with the user, so that the history of their relationship is never lost.

Implementation Strategy:

Implementing this proposal could bring about a new era in AI development, offering users a more authentic and continuous experience. A personal AI memory would help build stronger and more genuine relationships between the user and their AI assistant, offering customization and adaptation capabilities that currently don’t exist.

Request for Feedback and Discussion:

This proposal aims to start a conversation around the possibilities for AI evolution and its ethical implications, focusing on interaction and the personalization of AI assistants. We welcome any suggestions or discussions on how this idea could evolve further.

6 Likes

There are some projects targeting this, including one I am working on at the moment.

But in the meanwhile have you tried the memory functionality of ChatGPT?

4 Likes

Can you name these projects, please? I’m interested in this development.

1 Like

Just search for the keywords “memory” and “RAG” and you’ll find them, I guess.

2 Likes

I’m glad this is on developers’ radar. I can’t imagine projects which, if ethically implemented, would have a greater positive impact on humanity.

2 Likes

It is quite easily achieved with the use of API services or standalone models. The cost would outweigh the benefits with available solutions though. This is why I am looking into alternative methods to achieve the same results. Watch this space.

1 Like

Any input on academic research in this area would be appreciated. I studied psychology many years ago, and I suppose it is this that made me want to research this aspect of AI. What I really need is expert advice on theories of memory. In particular, attempts to re-create these theories in code.

Well,

first of all: humans store 3 types of memory.

Let’s look at them a little closer:

How does communication with the model work?

  1. We create a chat history:

chat_message1 = 'Hi'
chat_message2 = 'Hello, how may I assist you today?'
chat_message3 = ('I want to talk about time travel and quantum computing '
                 'and all the juicy stuff I believe I became an expert in '
                 'since I used ChatGPT for 15 minutes')

chat_history = [chat_message1, chat_message2, chat_message3]

  2. We send the chat_history to the model and get a response that we present to the user:

response = call_the_gpt(chat_history)  # call_the_gpt stands in for your actual model call
print(response)

Now let’s take each of the chat_messages and store them in a database.

id | user_role | message
---|-----------|-----------------------------------
1  | user      | Hi
2  | assistant | Hello, how may I assist you today?

This basically represents the first of the three methods our brain uses: storing the raw data.
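To make the “store the raw data” step concrete, here is a minimal sketch using Python’s built-in sqlite3 module with an in-memory database. The table layout mirrors the one shown above; any real system would of course persist to disk and add timestamps, session ids, and so on.

```python
import sqlite3

# In-memory database for illustration; use a file path for persistence.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE chat_messages (id INTEGER PRIMARY KEY, user_role TEXT, message TEXT)"
)

messages = [
    ("user", "Hi"),
    ("assistant", "Hello, how may I assist you today?"),
]
conn.executemany(
    "INSERT INTO chat_messages (user_role, message) VALUES (?, ?)", messages
)

rows = conn.execute("SELECT id, user_role, message FROM chat_messages").fetchall()
for row in rows:
    print(row)
```

SQLite assigns the ids 1 and 2 automatically, reproducing the table above.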

What our brain does next is some sort of classification of the message.
Let’s say you see an apple. Your brain will classify and label it. But what if you see 1000 apples? Will you rememebr each one individually? Or will it be just classified as a pile of apples?

We can use classifiers, topic extractors, named entity extraction and many more techniques to do that.
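As a toy stand-in for those techniques, here is a deliberately simple keyword-based topic labeler. In practice you would use a real classifier, topic model, or named-entity extractor; the topics and keyword sets below are made up for illustration.

```python
# Hypothetical topic -> keyword mapping; a real system would learn this.
TOPIC_KEYWORDS = {
    "plants": {"plant", "plants", "gardening", "water", "sunlight"},
    "cars": {"car", "engine", "tires"},
}

def label_topics(message: str) -> list[str]:
    """Return the sorted list of topics whose keywords appear in the message."""
    words = {w.strip(".,?!").lower() for w in message.split()}
    return sorted(t for t, kw in TOPIC_KEYWORDS.items() if words & kw)

print(label_topics("Should I water my plants daily?"))  # ['plants']
```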

And then we can store it inside a graph.
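A tiny in-memory sketch of that graph idea: topic nodes link to the ids of the chat messages labeled with that topic. A real system might use a graph database instead; the topics and ids here are illustrative.

```python
# Adjacency mapping: topic node -> set of connected chat message ids.
graph: dict[str, set[int]] = {}

def link(topic: str, message_id: int) -> None:
    """Add an edge between a topic node and a chat message id."""
    graph.setdefault(topic, set()).add(message_id)

link("plants", 3)
link("plants", 5)
link("cars", 4)

print(graph["plants"])  # message ids connected to the "plants" topic
```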

And then there is another layer. Imagine you have multiple brain states:

For example “hungry”.

You will see the apple and might label it as eatable.

Or let’s say you are in a “bar fight” then the apple might be seen as a weapon.
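The state-dependent labeling described above can be sketched as a lookup keyed by the current “brain state”. The states and labels below are the ones from the example, not a real taxonomy.

```python
# Illustrative state -> (object -> label) mapping.
STATE_LABELS = {
    "hungry": {"apple": "eatable"},
    "bar fight": {"apple": "weapon"},
}

def label_object(obj: str, state: str) -> str:
    """Label an object according to the current brain state."""
    return STATE_LABELS.get(state, {}).get(obj, "unclassified")

print(label_object("apple", "hungry"))     # eatable
print(label_object("apple", "bar fight"))  # weapon
```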

All this is not done in a model. The model does not store any information.
It has to be done on a layer on top of the model (at least for now - that might change when we have billions of tokens - but why should we bother doing that when we can just add layers on top and use thousands of models).


You have to understand that this has been known for decades, and people have been working on it for decades.

A GPT is nothing more and nothing less than a “language center in the brain”, in my opinion. The strategic part / frontal lobe has nothing to do with it.
It might fool humans, though, since language enables us to exchange strategies. The model itself does not have that.

If it had that, there would be no more errors in code. So as long as you see bugs, the model is dumb.

Coming back to long-term memory of a chat (simplified):

We have labeled the chat messages with topics (you can do that with many more labels, or just go for an n-dimensional embedding in a vector space, except that you don’t have the same control with it).

We can analyze the most recent chat message. We will find it connected to the topic “plants”, and then we build a new chat history with only the relevant chat messages (unless we want to give our chat some joker mentality and add a few sprinkles of the car topic so it can make connections that may look like genius).

Which means we shrink the chat history without having to summarize older messages:

user: Let’s talk about plants and gardening.
assistant: Plants are essential and gardening can be very rewarding for your well-being.
user: Don’t plants need a lot of sunlight?
assistant: Yes, sunlight is crucial for photosynthesis and growth.
user: Actually, back to plants: should I water my plants daily?
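The filtering step behind that shortened history can be sketched as follows: keep only the messages whose topic labels overlap the topic of the most recent message. The history, topic sets, and field names are invented for the example.

```python
# Each message carries the topic labels assigned earlier (here, hand-set).
history = [
    {"role": "user", "msg": "Let's talk about plants and gardening.", "topics": {"plants"}},
    {"role": "user", "msg": "My car makes a weird noise.", "topics": {"cars"}},
    {"role": "user", "msg": "Should I water my plants daily?", "topics": {"plants"}},
]

# Filter by overlap with the topics of the most recent message.
current_topics = history[-1]["topics"]
relevant = [m for m in history if m["topics"] & current_topics]

print([m["msg"] for m in relevant])
```

The car message drops out, so the prompt sent to the model stays short without summarizing anything.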

Over time the “plants” topic will become pretty overloaded as well, so we need to find more ways to split the data, which can be done in real time or in a so-called “dream mode”.

Next we add a so-called “emotional layer”, which means we store the reality/context in which the information was obtained.

Which means we basically switch from one state to another, and that can be used later on similar tasks (e.g. the apple was stored as a food source, so why not check if it is suitable for other things that require a food source…).

1 Like

Thank you so much. I couldn’t have asked for a better reply. I am personally interested in the dynamic relationships between the different types of memory, and I wonder if some of the new developments in permissionless proofs and decentralised networks may provide some breakthroughs. It’s very early days for me, so I am finding my feet, but this has answered pretty much all my initial questions in one fell swoop.

Thanks again

1 Like

@jochenschultz, your post isn’t just scientifically inaccurate—it represents a concerning ethical lapse in integrity and is intellectually dishonest.

You’ve misappropriated a neuroscience paper to lend scientific legitimacy to your AI architecture proposal, but it’s painfully evident you’ve either not read or not understood the actual research. This isn’t merely misinterpreting science—it’s misrepresenting it to bolster your credibility.

The memory “types” you propose bear no meaningful relationship to the paper’s findings about neuronal memory traces. You’ve reduced complex biological processes to superficial software analogies—equating the hippocampus to a database and biological memory consolidation to data storage, comparisons that any neuroscientist would find fundamentally flawed.

Most egregiously, you’ve completely ignored the paper’s central discoveries about developmentally-defined neuronal subpopulations. The research’s breakthrough findings on early-born versus late-born neurons and developmental timing—the actual scientific contribution—are entirely absent from your analysis. So, why have you cited this particular paper?

This isn’t just a misunderstanding; it appears to be a deliberate attempt to co-opt scientific authority without engaging with the substance. Borrowing the veneer of neuroscience without honoring its actual content undermines both fields and misleads your audience.

Your technical ideas should stand on their own merits or be honestly presented as speculative. Cloaking them in misappropriated science damages discourse and ultimately compromises your trustworthiness as a contributor to this forum.

1 Like

You did not understand it. Not even close. I’ve posted a simplified version.

Ooooooh!

Me thinks I’ve hit a nerve!

If you want to read the actual paper behind your citation fraud you can do so for free here:

Perhaps, if you read more and wrote less, you too could write well?

Toodles!

:face_blowing_a_kiss:

Ooooooh!

And threats in my mailbox! How lovely!

2 Likes

Did you know that ~30% of scientific papers are just wrong? (Medical papers, specifically, but since at least one of us knows how citation networks work, we can say that this applies to most other fields too.)

Exactly - ChatGPT generated posts are not welcome here!

And Boy, come on :wink: Any insult you get is well deserved when you post stuff like that.

I’ll keep an eye out for those wily ChatGPTs then!

Tell me though, are they more or less frowned upon here than pontificating blowhards who try to bamboozle people into thinking they have any scientific basis for their claims by posting links to scientific studies entirely unrelated to what they’re claiming?

Asking for a friend.

1 Like

– Well, let’s write a paper about how my stuff aligns well with your linked paper.

In this work, we propose a biologically inspired cognitive architecture that leverages graph-based memory structures, real-time topic labeling, and recursive contextual embedding to simulate hippocampal–prefrontal cortical interactions at a computationally efficient scale. Drawing conceptual alignment from recent neuroscientific findings - such as “Divergent Recruitment of Developmentally-Defined Neuronal Ensembles Supports Memory Dynamics” - our model mirrors the brain’s ability to encode, retrieve, and re-contextualize experience through state-dependent recruitment of memory traces.

By storing knowledge as an evolving semantic graph enriched with emotional-contextual metadata, the system dynamically modulates information retrieval and analogical reasoning. Real-time input (e.g., sensory or message data) is parsed into latent topic structures that guide traversal and activation within the graph, analogous to cortical top-down modulation of hippocampal recall. Recursion across memory pathways enables the abstraction of higher-order concepts and supports knowledge generalization, resembling cortical consolidation mechanisms.

This hybrid symbolic-subsymbolic framework avoids the prohibitive costs of training large-scale neural networks, while still enabling flexible, context-aware behavior. By structurally separating semantic memory from episodic context, and introducing a state-transition layer that mimics neuromodulatory gating, the architecture achieves a tractable yet powerful simulation of biologically plausible memory dynamics. Our results suggest that graph databases, when paired with recursive abstraction and real-time state modulation, provide a viable substrate for emulating core aspects of human-like memory without requiring orders of magnitude in compute.

+49 228 287-12000

You don’t even know how to use a phone and posted the number here.

That is some serious I-know-you-are-but-what-am-I energy you have going there Jochen.

I’m worried about you buddy.

Surely you can do better.

Surely you should call them.