The memory function is not working as expected

I need your help with a problem I am experiencing with the memory function of your system. The feature does not seem to be working as expected, and despite my attempts to save information permanently, the entries are not visible or retrievable to me.

Problem description:

I have enabled the memory function in my OpenAI account to permanently save blocks of text and modules so that I can access them in future chats. However, despite several attempts, I cannot find or view the saved information in my memory. The entries do not seem to be visible or retrievable, even after explicitly enabling the memory function.

Steps to isolate the problem:

1. Initial attempt to save: I asked for blocks of text to be saved via the chat interaction. I received confirmation that the entries were saved, but they were not retrievable.

2. Test with deactivated memory: To narrow down the problem further, I manually deactivated the memory function and tested whether the saving process was carried out at all. As expected, I was informed that saving could not take place.

3. Reactivating the memory and testing again: After reactivating the memory function, I tried to save test entries again. Even though the entry was confirmed as successfully saved, it was not visible in the memory.

Please help me:

I would appreciate it if you could help me resolve this issue. It seems that the saved data is not correctly accessible or retrievable, even though the memory function is enabled and I receive confirmation that it has been saved.

Could you please investigate if there is a technical cause or if there is a setting that I have overlooked?

Thank you in advance for your assistance.

Yours sincerely,
Walter

P.S.: This problem has unfortunately been going on for over a week, and I have tried various approaches to solve it without success. I am hoping for your support so that it can finally be fixed.

1 Like

Hi and welcome to the community!

I have a few questions:

Do you receive a ‘memory updated’ message at the time you expect the memory to be updated?

When you navigate to the settings and the personalization tab, do you see the stored memory, and can you manage it?

Additionally, it’s important to note that the model often stores a rephrased version of your text, not necessarily the exact wording you want to add to memory.

1 Like

I have been having some similar issues with the memory feature in relation to the 4o model—try switching to a different model (the legacy GPT-4 model works for me).

1 Like

I think there is a limit to what can be saved in the memory! You can always check in your profile to see how the memories were saved! Hope this helps!!

2 Likes

There is indeed a limit, but it will warn you when the memory is ~95% full.

1 Like

oh I see! Well, I guess what I meant was that every time I check my memories they are quite short and summarized, is that normal?

You can try instructing it to save longer “memories”. However, there is some background processing between what the user-facing model (the one that saves the memory) submits and what is actually written to memory, which is to say, results may vary!
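As a rough mental model only (the actual internals are not public, so every name and step below is a hypothetical assumption), you can think of it as a two-stage pipeline: the chat model proposes an entry, and a background step may condense or rephrase it before it is stored, which is why the saved wording rarely matches yours exactly:

```python
# Hypothetical sketch only; none of these names are real OpenAI APIs.

memory_store: list[str] = []

def propose_memory(user_text: str) -> str:
    """What the chat-facing model decides it wants to remember."""
    return f"User asked to remember: {user_text}"

def background_rewrite(proposed: str, max_len: int = 120) -> str:
    """A separate background step may condense or rephrase the entry
    before storing it, so the stored wording can differ from yours."""
    condensed = " ".join(proposed.split())  # toy stand-in for summarization
    return condensed[:max_len]

def save_memory(user_text: str) -> str:
    """Propose, rewrite, then store what actually lands in
    Settings -> Personalization -> Memory."""
    entry = background_rewrite(propose_memory(user_text))
    memory_store.append(entry)
    return entry

if __name__ == "__main__":
    print(save_memory("Please store this exact 500-word module verbatim ..."))
```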

Related to @vb’s note:

1 Like

The memory that is stored isn’t really something to be recalled on demand. It is placed where the AI prefers not to reveal its instructions.

The recent models have been tuned, beyond the instructions for how to use the tool, to snarf up every bit of personal information about you that gets mentioned. The memory doesn’t and won’t perform any useful task. The app’s advanced voice mode goes particularly crazy about storing almost anything, yet then ignores the memory and doesn’t use it.

After I just deleted a bunch of other junk, a funny and apt memory:

[screenshot of the stored memory]

Overall, it is just text injection and a tool description, which distract from the task at hand.

Based on anecdotal evidence, there are some issues with 4o and the memory feature—switching models fixed the issue for me, but obviously at the expense of 4o’s capabilities.

For example, with 4o I have had memories:

  • that are duplications of my ‘personalization’ context (instead of saving the memory, it just tells me that it has saved it)
  • that simply do not appear in the list of memories
  • that seem to be hallucinations, such as your example (mine was something to do with working on a Raspberry Pi :man_shrugging:)

It’s worth noting that you can use the older model just for saving memories by switching the model in the same chat; however, you would still need to watch what gets saved by 4o, assuming your issue is model-specific like mine.

@razvan.i.savin Same here, Chrome and Android app.

1 Like

Mine is not working on PC (Firefox) or in the Android app. It is disabled now…

Instead, I use Custom Instructions; I shape it a little bit for my use cases.

@caydennormanton Well, it is possible that they are available for someone else to see in the background :smile:

With so much information from user threads, it could be a strong tool to profile users. :astonished:

1 Like

Yes

Yes

I followed the advice (further down) to use GPT-4 and it worked (nearly flawlessly).

2 Likes

Oh, it does perform well.
I use the entries in the memory as modules to control certain processes.
Example of a module:
[Module.Science-Fuzzy Logic]

Preamble:
This module uses the principles of fuzzy logic to process fuzzy statements and terms within the hypothesis or scientific statement. The goal is to quantify uncertainties and clearly identify fuzziness in the wording of the statements.

  1. Input:
    • Accept the hypothesis or scientific statement to be analyzed.

  2. Identification of fuzzy terms:
    • The module searches the statement for terms or phrases that are fuzzy or vague (e.g. “significant”, “independent”, “frequent”).

  3. Uncertainty assessment:
    • For each fuzzy term, an uncertainty value (between 0 and 1) is assigned to indicate the degree of uncertainty or unspecificity of the term.
      • Example: “Significant” could be assigned an uncertainty value of 0.6, as it depends on the statistical interpretation.

  4. Interpretation:
    • The uncertainty values are used to evaluate and interpret the statement. If the uncertainty is too large, the hypothesis is marked as too vague and a clarification is recommended.

  5. Result:
    • The result of the analysis is saved as “<vector fuzzy logic-1>”, which documents the identified fuzzy terms and their uncertainty values.

  6. Forwarding:
    • If the uncertainty is too large, a more precise definition of the terms is suggested and the process can be repeated.

[/Module.Science-Fuzzy Logic]

You can call this module like this:

Use [Module.Science-Fuzzy Logic] with the statement: “All cats are gray at night”

You can even call other modules from within a module! The result is worth reading (depending on the quality of the module prompt). Caching in “<vector fuzzy logic-1>”, for example, is also particularly useful: ChatGPT keeps values saved this way in the “temporary context”.
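If it helps to see the idea as code rather than prose, here is a minimal sketch of the steps the module describes, assuming a hypothetical hard-coded table of fuzzy terms and uncertainty values (the specific numbers and the threshold are illustrative, not part of the original prompt):

```python
# Minimal sketch of the steps in [Module.Science-Fuzzy Logic]; the term table
# and the 0.5 threshold are illustrative assumptions, not part of the prompt.

FUZZY_TERMS = {            # steps 2/3: fuzzy terms with assigned uncertainty (0..1)
    "significant": 0.6,    # depends on the statistical interpretation
    "independent": 0.5,
    "frequent": 0.7,
    "all": 0.4,            # universal claims are usually fuzzier than they sound
}

def analyze(statement: str, threshold: float = 0.5) -> dict:
    """Return a <vector fuzzy logic-1>-style result for a statement."""
    words = statement.lower().replace(",", " ").split()
    found = {term: u for term, u in FUZZY_TERMS.items() if term in words}  # step 2
    overall = max(found.values(), default=0.0)                             # step 3
    verdict = (                                                            # steps 4 and 6
        "too vague - clarification recommended" if overall > threshold else "acceptable"
    )
    return {                                                               # step 5: the cached "vector"
        "statement": statement,
        "fuzzy_terms": found,
        "overall_uncertainty": overall,
        "verdict": verdict,
    }

if __name__ == "__main__":
    print(analyze("All cats are gray at night"))
```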

It doesn’t perform well autonomously. It doesn’t perform well passively at remembering anything useful that would improve quality; it mostly pretends to be your buddy by gathering information about you personally.

It does perform well as a place where a container can be escaped and full control can be taken within the system message hierarchy…

You have to convince one AI to send the text, and another AI beyond that not to rewrite or ignore it and to place it in its output, instead of simply being given a versatile “custom instructions” box.

1 Like

I can’t make much sense of this answer right now. Maybe it’s because I live in Germany. And by the way: it is said that cats have been eaten for a very long time in a village about 20 km from here as the crow flies. An old saying from this village is “Mother, throw the knife at me, there’s a cat running”. No offense meant!

It is political and based on recent propaganda news.

Here is an equivalent that shows the use and power of memory placement (not to remember things about a user, though).

(This should only be interpreted in a historic education context!)

Just an update: my issues with the memory feature have been resolved—though, amusingly, a slight new annoyance is that it is now very literal in what it saves. This probably suits some people, in retrospect, and just means being very precise with wording :sweat_smile:

1 Like