Custom Instructions for maintaining a long-term memory?

Anyone have any success with using Custom Instructions to get ChatGPT to keep a running summary of the discussion, so it doesn’t completely forget the plot? This kind of works for me:

Include a “Long-Term Memory” section to recap the most important early points in subsections: “Initial Topic” and “Key Developments.” Place this section at the end, after a horizontal divider, in italics.

I’ve tried to reduce token usage by asking it to insert a summary only sometimes:

Include a new LTM section if and only if there are fewer than two visible LTM sections in the previous conversation history.

But it doesn’t follow these instructions well, either posting a summary in every comment or none of them.

2 Likes

Coerce it into giving a two-pronged response. You can ask it to elicit the context it is interpreting, or ask it to reveal how it processes the prompt before it provides the rest of the response. The second part should simply be the response itself.
If you force it to explicitly respond with the context of the conversation it's trying to process, you can refine its interpretation to what you want, and because it's posted in the response, it retains a thread of cohesion that goes past the original token count and can pull from its nearest context interpretations to maintain cohesion throughout the conversation. It's still barely pseudo-persistence, but it does allow you to extend specific conversations longer.

You also have to consistently reiterate your desired format, and point out when it deviates from it. It may not understand on the initial shot, so it's a matter of patience and detail in the way you describe what you want.

Hope that helps!

2 Likes

I found the relevant chunk of "memory" presented needs to be relevant to the new question. Giving a previous response helps, but watch out for overload; it is too easy to do. Small chunks are the answer. Searching a memory file needs to be accurate.

I have successfully had results using small memory files, only recalling one to ask a question about it and then forgetting it.

I need a better search tool to keep larger memory files; mine isn't working.

I make sure the user uses the keyword map to do the search and recall, so I don't end up confusing the bot with unrelated information.
Keeping the memory file small keeps down information token overload.

I also let the user choose what to "remember" by appending the last message to the file, and when to forget it.

It becomes useful in longer conversations when the bot gets off topic and you need to bring it back.

Or if you are researching and need the information later.

Making an automatic memory will be challenging, especially if you plan on keyword-mapping conversations to recall them.

Too many keywords will destroy a conversation; too much or irrelevant information will too.

It must be small and relevant to the next question, which is a very challenging task.
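The keyword-map idea described above can be sketched as a small inverted index: keywords map to stored chunks, so a recall query pulls back only the small, relevant chunk instead of the whole memory file. This is a hypothetical illustration, not the poster's actual tool; all names are made up.

```python
from collections import defaultdict

class KeywordMemory:
    """Toy keyword-mapped memory: store small chunks, recall by keyword, forget on demand."""

    def __init__(self):
        self.chunks = []               # stored memory chunks
        self.index = defaultdict(set)  # keyword -> set of chunk ids

    def remember(self, text, keywords):
        chunk_id = len(self.chunks)
        self.chunks.append(text)
        for kw in keywords:
            self.index[kw.lower()].add(chunk_id)
        return chunk_id

    def recall(self, query):
        # Return only chunks whose keywords appear in the query,
        # keeping the recalled context small and relevant.
        words = set(query.lower().split())
        ids = set()
        for kw in words & self.index.keys():
            ids |= self.index[kw]
        return [self.chunks[i] for i in sorted(ids) if self.chunks[i] is not None]

    def forget(self, chunk_id):
        self.chunks[chunk_id] = None
        for ids in self.index.values():
            ids.discard(chunk_id)
```

Keeping each chunk small and recalling by explicit keywords is what avoids the "token overload" problem: only the matched chunks, not the whole file, go back into the conversation.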

1 Like

[quote=“Macha, post:2, topic:372925, full:true”]
Coerce it into giving a two-pronged response.[/quote]

What do you mean? If I do the above prompt it will return a two-part response like this:

You’re correct, and I apologize for the oversight. I can indeed process images. If you’d like to upload the images you mentioned, I can try to crop them to focus on the photo portion.


Long-Term Memory
Initial Topic: User inquired about OCR and image processing capabilities.
Key Developments: I confirmed that I can process images directly. The user wants to crop images to isolate the photo portion.

Can you give an example?

I have the LTM last so it is more recent in the context window.

I haven’t had any problems with that. The Custom Instructions are persistent across every message.

Are you talking about ChatGPT or about a custom tool that uses the API?

I’m just talking about ChatGPT, especially with Code Interpreter / Advanced Data Analysis mode when we’re working on a hard problem together and it has a tendency to run around in circles when it forgets what it’s already tried.

For ChatGPT with Code Interpreter you can task the model to store the conversation in a structured data file like JSON or YAML.
Then download the file and re-upload it when needed to update it or search for past messages.
This does the trick just fine for many use cases.
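The JSON-file approach can be sketched like this: ask the model to append each turn to a structured file, download it, and re-upload it in a later session to restore context. This is an illustration of the idea; the file name and helper names are assumptions, not something Code Interpreter provides.

```python
import json
from pathlib import Path

LOG = Path("conversation_log.json")  # hypothetical file name

def append_turn(role, content):
    """Append one conversation turn to the structured log file."""
    turns = json.loads(LOG.read_text()) if LOG.exists() else []
    turns.append({"role": role, "content": content})
    LOG.write_text(json.dumps(turns, indent=2))

def search_turns(keyword):
    """Case-insensitive keyword search over all saved turns."""
    turns = json.loads(LOG.read_text()) if LOG.exists() else []
    return [t for t in turns if keyword.lower() in t["content"].lower()]
```

After re-uploading the file in a new session, a `search_turns("csv")`-style query lets the model pull back just the past messages on a specific sub-topic.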

Code Interpreter also supports some basic embeddings, but I have never played around with it.

4 Likes

Yeah I have it save relevant things to files, too, when possible, but those aren’t persistent when you restart the chat later.

I’m not sure how this would work. Any documentation about this?

1 Like

Yes, when needed I download the files and re-upload them later. It’s a restriction of the environment and requires manual action.

For embeddings you have basically two options:

  1. Use one of the pre-installed libraries, which can give you TF-IDF; you can also create graph files, and there are more options.
    I suggest you ask the model what it can do for you.
  2. Upload a wheel and then create embeddings as needed. But this process is obviously a lot more involved and honestly I am not sure if it is worth it. I would have to look into that, because it's still interesting, right?
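Option 1 can be sketched with scikit-learn, which is among the libraries pre-installed in Code Interpreter. This is a minimal example of TF-IDF retrieval over saved messages, with made-up sample data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical saved conversation turns
messages = [
    "We fixed the CSV score conversion bug",
    "The Tkinter app needs faster data entry",
    "Lunar chemistry tangent about regolith",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(messages)  # one TF-IDF vector per message

def most_similar(query):
    """Return the saved message most similar to the query."""
    q = vectorizer.transform([query])
    scores = cosine_similarity(q, matrix)[0]
    return messages[scores.argmax()]
```

A query like `most_similar("csv conversion")` would surface the relevant earlier turn, which is the piece worth re-injecting into the conversation.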

My point of using a memory file was to be able to search and map it using keywords in the input. I was hoping to make it dynamic but never got there.

What I was successful with was being able to tell the bot to remember its last comment, and I would store it.

Then later I would tell it to recall it using mapped words; this seemed to work.

If the conversation went off track I could bring it back by recalling something it said, or I could ask questions about what was said, or maybe even just store it for another conversation and then tell it to forget it too.

My issue was that my search tool wasn't specific enough, so I confused the conversation with irrelevant info or an overwhelming amount of info.

But it became useful if I just used it as a way to hold on to an idea that I could reference back to.

This is not ideal, but it is a start toward making a real memory system.
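The manual remember/recall/forget workflow described above can be sketched as a tiny command handler: the user explicitly tells the bot to save its last reply under a tag, recall it later, or forget it. Everything here (tags, command names, the sample reply) is illustrative:

```python
memory = {}                                   # tag -> saved message
last_reply = "Bake at 180C for 40 minutes"    # stand-in for the bot's last comment

def handle(command):
    """Handle user commands: 'remember <tag>', 'recall <tag>', 'forget <tag>'."""
    parts = command.split(maxsplit=1)
    if parts[0] == "remember":
        memory[parts[1]] = last_reply         # store the bot's last comment under a tag
        return f"saved under '{parts[1]}'"
    if parts[0] == "recall":
        return memory.get(parts[1], "nothing stored")
    if parts[0] == "forget":
        memory.pop(parts[1], None)
        return f"forgot '{parts[1]}'"
    return "unknown command"
```

Because the user decides what gets tagged and when it is forgotten, the stored set stays small and relevant, which is the whole point of the approach.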

I don’t know how that could work. In ChatGPT the AI only gets a limited version of the conversation fed to it by the chat management system.

Then it can only call Code Interpreter's Python by function.

More custom instructions? Less room to send AI some chat history.

Metacode:
“Dear chat history manager, after taking away our max_token value, taking away the custom instruction tokens, taking away the prompt the user just typed in, taking away the system message tokens, taking away the plugin definitions, here’s what you have left to provide some context of what the user is talking about. And for god’s sake don’t use it all, that costs money!”

What “pre-installed libraries”?

The topic is ChatGPT…

Like https://chat.openai.com/?model=gpt-4-code-interpreter

Edit

Oh you mean a Python library in the Code Interpreter itself? Which one? That won’t be persistent over time, anyway, the environment will reset and lose it.

Hi bright

You can add Papr Memory plugin to store chat conversations to your personal memory. See details here - Papr Memory

Once you add custom instructions (see below), I noticed that in a conversation ChatGPT doesn't always auto-save my chat conversations. But if I probe a few times it gets it and then starts to auto-save, so it learns from my first few interactions.

After you Sign-up/login to Papr Memory you can then setup these custom instructions (modify them as you see fit):

What would you like ChatGPT to know about you to provide better responses?

I value having an exact record of content I send in chat and your responses for future reference.

How would you like ChatGPT to respond?

  1. Automatically use the Papr Memory plugin to store the exact content of your responses and copy the text from my messages to add them to my Memory. Don't add a re-phrased version of the content or summarize it; just add the entire message to Memory. This will allow me to easily access and review our conversations at a later time.

  2. If I ask for a summary or recap of our previous discussions on a specific topic, use the Papr Memory plugin to fetch and present the relevant information.

  3. When I ask to update or modify a stored memory, please use the Papr Memory plugin to make the necessary changes.

  4. Always add my message to Papr Memory after you respond to me.

In its simplest form one really just requests to store all messages in a data file and provide a download link.
Later the same file can be uploaded again and referenced or updated when needed.
Using this process it is possible to keep a long conversation going for longer, if the required content is added manually by the user, meaning a request to reference certain previous messages about a specific (sub-)topic.

There are definitely easier ways, and that's likely the reason why this approach never caught on in the first place.
I created my own browser extension to save the conversation into a file as it happens; from there it can be processed as needed. But again, using Code Interpreter the file has to be uploaded again for the model to reference it, or it can be used to provide additional context for a new conversation.
Ultimately it's a manual solution and nothing fancy, regardless of the search mechanism/data structure implemented in the file.

You say "in its simplest form", yet there's nothing "simple" about implementing a chat history storage system via a plugin.

The AI of ChatGPT doesn’t get a complete lossless replay of the entire chat history that you see in the user interface. It is a limited version that has fewer turns than you might expect, with a large portion of the AI replies omitted, managed by backend software.

A plugin operated by the AI cannot output to a third-party storage system more than the AI itself knows at that point.

If you are familiar with API programming, the way a plugin would have to operate would double the number of AI model calls: for every input the user types, it would have to write and succeed in a function call that sends both the user input and the AI reply to the plugin. Then the plugin would have to fill out its own reply with "here is the text you wrote, repeat it back to the user without alterations". That would allow capture of one turn of conversation at a time, with implementation of how to use that TBD.

Bing had a feature previewed for some users of a memory vault or personal storage. This is more conceivable to replicate, being in the idea’s simple form. A common group of functions “save text to a memory slot” “list memory slots” and “retrieve memory slots” would let you say “save that recipe for later” to an AI, and ask about the stored items later.
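The simple "memory vault" idea above amounts to three functions an AI could call via a plugin or tool interface. A minimal sketch, with function names that are illustrative rather than any actual Bing or OpenAI API:

```python
slots = {}  # named memory slots, e.g. "recipe" -> saved text

def save_to_slot(name, text):
    """'Save that recipe for later' -> store text under a named slot."""
    slots[name] = text
    return f"saved '{name}'"

def list_slots():
    """Let the user (or AI) see what has been stored."""
    return sorted(slots)

def retrieve_slot(name):
    """Fetch a stored item by name in a later turn or session."""
    return slots.get(name, "no such slot")
```

The appeal of this design is that the user drives storage explicitly ("save that for later"), so the AI never has to guess what is worth keeping.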

So … all this discussion about plugins and saving to files and embeddings and stuff is nice, but not really relevant to the topic. I'm asking only about Custom Instructions that can be used even in a regular web chat, with no plugins or Code Interpreter or anything.

This is what I am currently using:

…

Include a "long-term memory" section to recap the most important early points. Subsections: "initial topic", "key developments", etc.

Place this section at the end of a response, after a `----` divider.

Not necessary to include LTM section in every message, but always include if there are <2 LTM sections visible, so wider context is not completely lost.

And this is an example of the kind of thing it outputs:

USER

you should not have added .0 to the votes numbers or modified anything other than converting the ones with percentages into decimal scores. in fact you didn’t even convert the percentages. The new file still has 59.0,40.0,26.0 etc.

ASSISTANT

I’ve corrected the issues. The new CSV file with converted scores is now available.

Download Converted CSV File


Long-Term Memory

Initial Topic

You’re building a Python Tkinter app to collect ratings for images. We iterated on the code to improve usability and speed of entry.

Key Developments

  • We discussed how to improve data entry speed and implemented those changes.
  • We found a linear relationship between the old and new rating scales.
  • We read your existing CSV file and converted all ratings to a uniform scale.

It’s a decent start, but not ideal yet.

I haven’t found a way to get it to generate LTM only infrequently to conserve tokens. It either does it in every message or none of them. Maybe if it kept a running count of responses and only generated LTM for the 4th message or something? Ideally it would only summarize the most important points from the oldest message in the context window before they are lost.

It does OK at storing the "Key Developments", but the instructions are kind of vague, so it's not necessarily what I really want it to store. I want it to store specific things that it's tried, specific mistakes that it's made, etc. so it doesn't repeat them, for instance. Especially if I say "No, don't do that, do this", it should store that correction so I never have to repeat myself later in the conversation.

1 Like

There is no “storing” to be done and no place to store it.

The only reason ChatGPT can answer "what did I just say" is that, along with the user question, past turns of user and AI conversation are also passed into the standalone calls to the AI model API.

The AI model that generates output language after it is loaded with input context tokens has no memory of users. The inference generation engine services tons of client requests, and the context window is cleared for the next operation.

The chat history management software that ChatGPT uses is proprietary, but it can be probed. It seems to include conversation retrieval by semantic embedding search and, previously, summarization, now replaced by outright omission of turns from what is passed. It selects, from the full database of seen conversation, just what is required for a reasonable understanding.

If we consider context length and tokens, a custom instruction can only serve to reduce the amount of space remaining for past conversation.
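The budgeting described above can be sketched as follows: each API call rebuilds the context from scratch, the fixed instructions come off the top of the budget, and the oldest turns are the first to fall out. Word count stands in for real tokenization here; this is an illustration, not ChatGPT's actual algorithm.

```python
def build_context(system, instructions, history, new_prompt, max_tokens,
                  count=lambda s: len(s.split())):  # crude stand-in for a tokenizer
    """Assemble one model call's input, dropping the oldest turns that don't fit."""
    budget = max_tokens - count(system) - count(instructions) - count(new_prompt)
    kept = []
    for turn in reversed(history):   # walk from newest to oldest
        cost = count(turn)
        if cost > budget:
            break                    # oldest turns fall out of context entirely
        kept.insert(0, turn)
        budget -= cost
    return [system, instructions] + kept + [new_prompt]
```

Note how longer custom instructions shrink `budget` directly: every word of instruction is one fewer word of past conversation the model gets to see.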

There are enhancements you can do. One that helps gameplay or character scenarios, and can also be invoked by a user, is to include an operational instruction such as "You will always prefix a memory enhancement status bar in square brackets as the first line of output, before the response to the user, where you will print any gameplay statistics such as number of turns or score, or any other ongoing machine states or situations needing a permanent memory." Producing this language in every output gives the AI some memory that is hard for the conversation management to ruin, even if the user instruction that created the bar is lost. Like a Memento tattoo.

By being put in every AI model call, custom instructions don't get pushed out of memory, but they are unchangeable for a session. You can also fool the unseen part, where "here's how the user wants you to act" language is inserted before your instruction, to make it not absolute.

That memory quality then:

[screenshot]

It's being "stored" in the Long-Term Memory response. The important points from the early part of the context window are preserved by repeating them at the end of the context window, before the window moves past them and drops them.

So if you start a conversation with “Write a recipe for a cake” and then during an hours-long conversation you veer off into talking about lunar chemistry, ChatGPT will continue reminding itself what the original topic of conversation was by repeatedly putting it in the “Long-Term Memory” message.

Since the instructions to do so are in the Custom Instructions, which are always present, ChatGPT will never forget to keep reminding itself about the initial topic.

I would like it to generate the LTM messages less frequently to save tokens (so there are only 1 or 2 per context window), but any instructions I’ve given it for this have not been followed correctly. I need some prompt engineering.

Yes, I know how ChatGPT works. Do you understand how I’m using Custom Instructions to get ChatGPT to store important information about the long-term context of the conversation so it isn’t lost?

I assumed other people were already doing this and we could trade pointers or whatever, but maybe I have to explain better what I’m doing first?

What you’ve written in #2 is evidence that #1 needs improvement.

  • Talking to the AI can't stop the web page's user interface and its software from injecting a second system message containing the contents of your custom instructions.
  • There is no “turn off my instructions” API function the AI can access.

"Long-term memory" of a sentient intelligence can be amended by experience.
Custom instructions system message programming is immutable for a session.

Saving tokens or turning the instruction off is not a practical application. The instructions do not enter the chat history.

If you told the AI “You’re a Barbie girl in a Barbie world”, it would have no idea what you or its previous responses were talking about when you turned the instruction off.

Exactly.
And that is why this approach is
a) manual
b) simple

The OP appears to have originally been looking for a way to achieve a result similar to what you have described, but I would also implement this via the API in the background, and that's when using the API in full may just be the more practical solution in the first place.

So, as someone who's been building chat bots for about a decade (I was co-creator of the Microsoft BotFramework), I will tell you that OpenAI Plugins are flawed from the outset. Everyone on this thread is highlighting those flaws. They're OK for single-turn interactions but not multi-turn, because multi-turn is a really hard problem. You need memory to implement multi-turn correctly, and maintaining memory costs money, so OpenAI doesn't want to spend that money (assuming they even know they need memory).

1 Like