I need your help with a problem I am experiencing with the memory function of your system. The feature does not seem to be working as expected, and despite my attempts to save information permanently, the entries are not visible or retrievable to me.
Problem description:
I have enabled the memory function in my OpenAI account to permanently save blocks of text and modules so that I can access them in future chats. However, despite several attempts, I cannot find or view the saved information in my memory. The entries do not seem to be visible or retrievable, even after explicitly enabling the memory function.
Steps to isolate the problem:
Initial attempt to save: I tried to have blocks of text saved via the chat interaction. I received confirmation that the entries were saved, but they were not retrievable.
Test with deactivated memory: To narrow down the problem further, I manually deactivated the memory function and tested whether the saving process was carried out at all. As expected, I was informed that saving could not take place.
Activating the memory and testing again: After reactivating the memory function, I tried to save test entries again. Even after receiving confirmation that the entry was successfully saved, it was not visible to me in the memory.
Please help me:
I would appreciate it if you could help me resolve this issue. It seems that the saved data is not correctly accessible or retrievable, even though the memory function is enabled and I receive confirmation that it has been saved.
Could you please investigate if there is a technical cause or if there is a setting that I have overlooked?
Thank you in advance for your assistance.
Yours sincerely,
Walter
P.S.: This problem has unfortunately been going on for over a week, and I have tried various approaches to solve it, without success. I am hoping for your support so that I can finally fix it.
Do you receive a "memory updated" message at the time you expect the memory to be updated?
When you navigate to the settings and the personalization tab, do you see the stored memory, and can you manage it?
Additionally, it's important to note that the model often stores a rephrased version of your text, not necessarily the exact wording you want to add to memory.
I have been having some similar issues with the memory feature in relation to the 4o model; try switching to a different model (the legacy GPT-4 model works for me).
I think there is a limit to what can be saved in the memory! You can always check in your profile to see how the memories were saved! Hope this helps!!
You can try instructing it to save longer "memories"; however, there is some background processing between what the user-facing model (the one that saves the memory) inputs and what is actually saved to memory, which is to say, results may vary!
The memory that is stored isn't really something that gets recalled on demand. It is injected into the same place where the AI prefers not to reveal its instructions.
The recent models have been tuned, beyond the instructions on how to use the tool, to snarf up every bit of personal information about you that gets mentioned. The memory doesn't and won't perform any useful task. The app's advanced voice mode is particularly aggressive about storing almost anything, yet it then ignores the memory and doesn't use it.
After I just deleted a bunch of other junk, a funny and apt memory remained.
Overall, it is just text injection and a tool description that is a distraction from the task at hand.
Based on anecdotal evidence, there are some issues with 4o and the memory feature; switching models fixed the issue for me, but obviously at the expense of 4o's capabilities.
For example, with 4o I have had memories:
that are duplications of my "personalization" context (instead of saving the memory, it only tells me that it saved)
that simply do not appear in the list of memories
that seem to be hallucinations, such as your example (mine was something to do with working on a Raspberry Pi)
It's worth noting that you can use the older model just for saving memories by switching the model in the same chat; however, you would still need to watch what is saved by 4o, assuming your issue is model-specific like mine.
Oh, it does perform well.
I use the entries in the memory as modules to control certain processes.
Example of a module:
[Module.Science-Fuzzy Logic]
Preamble:
This module uses the principles of fuzzy logic to process fuzzy statements and terms within the hypothesis or scientific statement. The goal is to quantify uncertainties and clearly identify fuzziness in the wording of the statements.
1. Input:
Accept the hypothesis or scientific statement to be analyzed.
2. Identification of fuzzy terms:
The module searches the statement for terms or phrases that are fuzzy or vague (e.g. "significant", "independent", "frequent").
3. Uncertainty assessment:
For each fuzzy term, an uncertainty value (between 0 and 1) is assigned to indicate the degree of uncertainty or unspecificity of the term.
Example: "Significant" could be assigned an uncertainty value of 0.6, as it depends on the statistical interpretation.
4. Interpretation:
The uncertainty values are used to evaluate and interpret the statement. If the uncertainty is too high, the hypothesis is marked as too vague and a clarification is recommended.
5. Result:
The result of the analysis is saved as "vector fuzzy logic-1" (imagine it enclosed in angle brackets), which documents the identified fuzzy terms and their uncertainty values.
6. Forwarding:
If the uncertainty is too high, a more precise definition of the terms is suggested and the process can be repeated.
[/Module.Science-Fuzzy Logic]
You can call this module like this:
Use [Module.Science-Fuzzy Logic] with the statement: "All cats are gray at night"
You can even call up other modules within a module! The result is worth reading (depending on the quality of the module prompt). Caching results in "vector fuzzy logic-1" (imagine it enclosed in angle brackets), for example, is also particularly useful: ChatGPT keeps the values saved in this way in the "temporary context". For a standalone illustration of the module's steps, see the Python sketch below.
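To make the module's logic concrete outside of ChatGPT, here is a minimal Python sketch of the same steps. Everything in it is my own illustrative assumption rather than something the module prescribes: the fuzzy-term lexicon, the individual uncertainty values, and the 0.5 vagueness threshold are just placeholders.

```python
# Minimal sketch of the fuzzy-logic module in plain Python.
# The lexicon, scores, and threshold below are illustrative assumptions.

FUZZY_TERMS = {
    "significant": 0.6,  # depends on the statistical interpretation
    "independent": 0.5,
    "frequent": 0.7,
}

VAGUENESS_THRESHOLD = 0.5  # assumed cut-off for "too vague"


def analyze_statement(statement: str) -> dict:
    """Identify fuzzy terms, assign uncertainty values, and flag vague statements."""
    words = statement.lower().replace(",", " ").split()
    found = {term: score for term, score in FUZZY_TERMS.items() if term in words}

    # Overall uncertainty: the highest single-term value (0.0 if nothing matched).
    overall = max(found.values(), default=0.0)

    return {
        "vector_fuzzy_logic_1": found,  # the cached result the module describes
        "overall_uncertainty": overall,
        "too_vague": overall > VAGUENESS_THRESHOLD,
    }


if __name__ == "__main__":
    result = analyze_statement("Smoking is a significant and frequent cause of illness")
    print(result)
    if result["too_vague"]:
        print("Recommendation: define the fuzzy terms more precisely and repeat the analysis.")
```

In the chat-based version, ChatGPT of course judges fuzziness qualitatively rather than with a fixed lexicon; the sketch is only meant to show what the module asks it to do.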
It doesn't perform well autonomously. It doesn't perform well passively either: it doesn't remember anything useful that would improve quality, except to pretend to be your buddy by gathering personal information about you.
It does perform well as a place where a container can be escaped and full control taken within a system message hierarchy...
You have to convince one AI to send the text, and another AI beyond that not to rewrite or ignore it and to place the text in its output as-is, instead of simply being given a versatile "custom instructions" box.
I can't make much sense of this answer right now. Maybe it's because I live in Germany. And by the way: it is said that, in a village about 20 km from here as the crow flies, cats have been eaten for a very long time. An old saying from this village is "Mother, throw the knife at me, there's a cat running". No offense meant!
Just an update: my issues with the memory feature have been resolved. Amusingly, a slight new annoyance is that it is now very literal in what it saves. This probably suits some people and, in retrospect, just means being very precise with wording.