Remember everything I've said

I’m new to this, and I’m missing something.

My Goal:
Create a chat bot that remembers everything I’ve ever said to it.

Current State:

  1. Uploaded a few hundred lines of “memories” from an existing chatbot through the API (simple JSONL format, one memory per line).
  2. Coding in Python, which I am functional at, but nothing fancy.

Current Research Effort:
I’ve gone through the fine-tuning documentation, and everything I’ve read seems to focus on working from a static, fine-tuned model.
I’ve seen a few threads discussing this, but it isn’t clear to me that the answers are the right ones. Maybe I just haven’t found the right answer yet.

I don’t want it to have to re-learn the entire history of our conversations each time I start a new session.

Once I have this model fine-tuned and start chatting, how do I add every line of my conversation to the memories, so I can come back tomorrow, restart the app, and continue where I left off without starting from scratch?

Do I just wrap the conversation inputs and outputs in JSONL and add them to a document? I haven’t seen a sure-fire example of that.
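For concreteness, here is roughly what I mean by “wrap and append” — a minimal sketch, assuming the `prompt`/`completion` field names from the fine-tuning docs (the file path and function names are just my own placeholders):

```python
import json

def append_memory(path, prompt, completion):
    """Append one prompt/completion pair to a JSONL memories file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")

def load_memories(path):
    """Read the file back: one dict per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]
```

Is appending every turn like this, then fine-tuning on the file, the right idea?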

Ideal Answer?
An example python script (or link to one) that will solve this problem.

I can’t possibly be the first person who wants to do it, but perhaps the least experienced? (I’ve hacked together programs for 40 years, but never went to school for any of it, so complex algorithms go right over my head)


Theoretically, what you are asking for is called online learning (roughly, learning continuously from inputs while deployed). This is fraught with concerns like “catastrophic forgetting”, where the model’s performance on previously learned inputs deteriorates precipitously as new ones are trained in.

I would recommend doing this in this fashion instead:

  1. Fine-tune with the memories.
  2. Use it and generate more memories, making them part of the prompt, until the input buffer fills up.
  3. Use some external heuristics to select a good subset (response quality, whether the question was rephrased, whether escalation to a human was performed, etc.; this is domain specific).
  4. Append to the original file, prune duplicates, and fine-tune in a batch process once a day (or every 3 or 5 days; it depends on volume, and is constrained by the limit of 10 fine-tunes a month).
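Step 4 might look something like this — a rough sketch of the append-and-prune part only (the fine-tune call itself is omitted, since it depends on your API client; the function name and dedup-by-exact-line approach are my own assumptions):

```python
import json

def merge_memories(original_path, new_records):
    """Append new records to the original JSONL file, skipping exact duplicates.

    Records are compared via their sorted-key JSON serialization, so two
    dicts with the same fields in a different order count as duplicates.
    Returns the number of records actually added.
    """
    seen = set()
    with open(original_path, encoding="utf-8") as f:
        for line in f:
            seen.add(line.strip())
    added = 0
    with open(original_path, "a", encoding="utf-8") as f:
        for rec in new_records:
            line = json.dumps(rec, sort_keys=True)
            if line not in seen:
                f.write(line + "\n")
                seen.add(line)
                added += 1
    return added
```

After the merge, you would kick off the nightly fine-tune on the updated file.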

What do you think?


BTW, excellent question.
If only all questions were so clear in what they want to accomplish. :slight_smile:
Welcome to the community.


Thanks! Yes, I definitely don’t want it to have to re-learn everything either…

I’ll add a specific case:

  1. I tell the chatbot that I like the colour orange.
  2. I shut down the app.
  3. Two months later, I restart the app and ask the chatbot what colour I like.
  4. I want it to remember (and tell me) “orange”.

I don’t want to give up all the power of Davinci; I just want to add some new permanent memories that are just for me.

I understand, as well, that I will have to start with a lower-power model until I have enough built to make it worth requesting access to a greater one.

Thanks for raising some of the potential ramifications. I definitely don’t want to give up the Davinci power in the long term. I’ve read that I am limited to the lower-power models without a specific request, which is probably for the better, so I don’t waste too many tokens getting to my MVP!

Does that clarify? I am completely fine for this to only be for one user (me).


Thanks for your reply!

I have some learning to do with steps 3 & 4. No problem; this is a long-term project for me.

My core questions are still the same, and I think maybe it’s because I am missing a core concept:

  1. How do I maintain these memories when I go away and come back two months later?
  2. When I fine-tune the model with some memories, will it retain them if I delete the file I used to tune it with?

I have a bunch more questions in my head, but I want to just get this basic functionality going enough that I can experiment, break a few new models, and build a better understanding until I find a new wall to climb over :slight_smile:

Hi @sean, welcome.
The best practice that I have found so far is to build a short-term memory based on the prompt, i.e., keep a log file and include the last 3-10 interactions in the prompt. You can also use the model to build a ‘summary’ of the previous interactions and add it as a single paragraph.
However, for the “real” long-term memory, I would actually use the semantic search approach and build a separate log file, which you can use to store the important parts of the conversations. You could fill it with all the summaries of the interactions.
Then, before every reply in a new interaction, you call the semantic search endpoint to search your log file for the most relevant paragraph, add it to your prompt, and you’ll have contextual behavior.
Note that this means there will be paragraphs it does not pick up, so you might consider taking the top 2-3 summaries from your log file.
Bear in mind that this is going to be costly, so if you can, keep prompts short, with only a little historical context.
You can use fine-tuning to upload your chat history from time to time and reduce the need for large prompts. It will let you have all the memory under the account at once, but not per user…
However, from my initial experience, fine-tuning runs the risk of pushing the model to circle back to the memorized interactions rather than producing new material, so keep that in mind. You can try tuning the “temperature” parameter to mitigate that.
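To illustrate the retrieval idea, here is a minimal sketch. The `embed` function below is a toy bag-of-words stand-in of my own, not the real endpoint; in practice you would replace it with a call to the embeddings/semantic search API and keep the same ranking logic:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words vector; swap in a real embeddings API call here."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_memories(query, memories, k=3):
    """Return the k memory strings most similar to the query."""
    q = embed(query)
    return sorted(memories, key=lambda m: cosine(q, embed(m)), reverse=True)[:k]
```

Before each reply, you would call `top_memories` against your log file and prepend the hits to the prompt.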


Thanks for this answer, it’s helping me find some focus on some concrete next steps.

Semantic Search - check.
Batch uploads of chat history - check.
Temperature variable usage - check.

I think I have some work to do, now!


Besides the excellent approach suggested by @NSY, what I was suggesting was to fine-tune as a batch process at, say, 03:00 every day, if there is new data to fine-tune on.

Regarding step 3: if you deem all interactions worth getting into the fine-tuning process, you may not need this step. However, sometimes you have to rephrase your inputs to get the system’s response in line with your intent. You may wish to train the model only on valid inputs, but that’s your decision to make.



  1. Yes, as long as they get into the model and you are still using the same fine-tuned model.
  2. Once trained, there is no relationship between your new fine-tuned model and the file used for fine-tuning. That said, you will want to hold on to the file so that you can append new data and retrain on an ongoing basis.

This is a great problem, and I’ve detailed how to solve it in my book, coming out soon! The key thing is that you can rely on fine-tuning to update “memories” periodically. This is similar to how human brains learn during sleep: sleep is when the human brain “embeds” knowledge. However, we also have short- and medium-term memory that lets us integrate things in real time, so you will still need databases and search to act in that manner. Even then, you might not want to rely on fine-tuning for an AI system, as it carries a risk of confabulation.

I’ve been working on this line of research, chatbots with long-term memories, and hope to write a paper on it soon.


Hey, did you write that paper eventually?