Privacy Concerns in ChatGPT's Memory System

I think the memory feature in ChatGPT has great potential, and I’d love to allow it to learn everything about me and retain memories I choose to share. However, I have concerns about privacy and security. Storing so much personal information in one place feels risky, as it could be vulnerable to hacking, unauthorized access to my account, or someone accessing a device where I’m logged in.

I understand there are already strong security measures in place, such as two-factor authentication, but even with these, I still feel uneasy about storing such sensitive information.

If these aspects were addressed, I believe many users, including myself, would feel much more comfortable using the memory feature freely. With the right protections, this function could become even more valuable.

I don’t use it at all; it interferes with any kind of experiment and really messes up the logic. Still, I can see how it’s useful, and it seems just as safe as saving a GPT with the service, IMO.

You can turn it off. And it remembers vague stuff like “user likes games”.


Welcome to the forum; it’s a wonderful place :rabbit::honeybee:

Is it really that hard not to put your sensitive information in a place that makes you worried? What’s the reason you must put sensitive information there anyway?

I get it, and I think the more transparently this is handled, the more comfortable users would feel, even when using the “Improve the model for everyone” setting.

I’m not certain whether our data is encrypted at rest, but it is protected in transit, meaning that when you send and receive chat messages, no one should be able to intercept them.

From my own experiments, we have about 4,500 characters of space to work with. That breaks down as 1,500 characters each for the “What would you like ChatGPT to know about you to provide better responses?” and “How would you like ChatGPT to respond?” boxes, plus roughly the same amount again for all saved memories.

It’s not a lot of space, so it can be curated carefully. You can tell ChatGPT to remember specific things, update them, or forget a memory entirely.
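
To make that concrete, here’s a rough Python sketch of the budget. It’s purely illustrative: the 1,500-character figures come from my own testing rather than any documented limit, and the helper function is hypothetical.

```python
# Illustrative only: these figures come from my own experiments,
# not from any documented OpenAI limit.
ABOUT_YOU_BUDGET = 1500  # "What would you like ChatGPT to know about you..." box
RESPONSE_BUDGET = 1500   # "How would you like ChatGPT to respond?" box
MEMORY_BUDGET = 1500     # approximate space for all saved memories
TOTAL_BUDGET = ABOUT_YOU_BUDGET + RESPONSE_BUDGET + MEMORY_BUDGET  # 4,500

def remaining_memory_space(memories: list[str], budget: int = MEMORY_BUDGET) -> int:
    """Return how many characters of the memory budget are left."""
    return budget - sum(len(m) for m in memories)

memories = [
    "User prefers concise answers.",
    "User is learning Python and the OpenAI API.",
]
print(remaining_memory_space(memories))  # 1428
```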


Here’s how I manage it:

I treat my memories as variables, mini functions, or nano databases declared in the memory feature. For example, some memories I treat like switches. If I’m having a tough day and need responses to be shorter and to the point, I can say, “I’m not feeling so well today”, and ChatGPT will set a flag such as iHsBadDay = TRUE;. This is how I use my memories: they’re always “full”, but the values can be updated.
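
As a toy Python sketch of that switch idea (a plain dict stands in for the memory feature, the iHsBadDay name is the one from above, and nothing here calls a real API):

```python
# Toy model: a plain dict stands in for ChatGPT's memory feature.
memory = {
    "iHsBadDay": False,             # switch: keep replies short today?
    "preferred_style": "detailed",  # a plain stored preference
}

def update_memory(store: dict, key: str, value) -> None:
    """Overwrite a memory in place: the slot stays 'full', only the value changes."""
    store[key] = value

def respond(store: dict, full_answer: str) -> str:
    """Trim the reply to its first sentence when the bad-day switch is set."""
    if store.get("iHsBadDay"):
        return full_answer.split(". ")[0] + "."
    return full_answer

# Saying "I'm not feeling so well today" flips the switch...
update_memory(memory, "iHsBadDay", True)
# ...and replies come back short and to the point.
print(respond(memory, "Short version first. Then a lot of extra detail."))
# prints: Short version first.
```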

Ultimately, you’re in control. If you don’t want to use the memory feature, it can be turned off in settings. But there are creative ways to use it if you want to think outside the box. I try to make the most of it while learning skills from ChatGPT until I can use the API and fully customise my experience.

I’d also like more information on how our data is managed.

Personally, I can’t imagine it wouldn’t be encrypted at rest, as otherwise everything could be exposed if OpenAI ever had a data breach. That said, because it’s possible to jailbreak GPT, it could potentially be used for some seriously harmful stuff. So while zero-knowledge encryption or E2EE sounds ideal, I get why that’s difficult: it would mean humans couldn’t oversee the system or ensure the technology is used responsibly. It’s just the nature of this kind of system, at least for now.

To put your mind a bit at ease:

ChatGPT has hundreds of millions of users, and even if you write some private and personal stuff, it’s highly unlikely that anyone would be scrutinising it individually. It’s like living in a big city — there’s a sense of anonymity in the crowd. This applies here, too.

ChatGPT is like a girlfriend who talks in her sleep: it can’t be trusted to keep anything secret. For one, I’m sure that at this point someone can figure out how to break into your personal information the same way they can crack a GPT. In addition, Big Brother may eventually tap into the memory to see if you’re up to no good. Moreover, if you’re caught breaking the law and your phone is seized, the man will have access to that information as well.

Ask ChatGPT to give you a line-item listing of everything it has in memory. You’ll be really surprised at the stuff that pops up. You also have the ability to tell it to erase anything it has in memory about you.

Well, after I asked ChatGPT, I found that even if you opt out of sharing (which I take to mean future training), it has the following, which I believe is covered under the T&Cs:

  • Human moderation to investigate people who repeatedly trigger warnings.
  • An internal feedback mechanism to alert developers of a fault (it will tell you it can’t report issues directly to OpenAI, but it did state it can report faults that “are anonymised” and “aggregated” with others; analytics, who knows?).
  • OpenAI developers may also anonymously (with identifying data removed) view your chat to resolve technical issues. I questioned how anonymous this is, and it told me the identifying metadata is stripped. I pointed out that it can’t be anonymous because I have discussed personally identifying information… it agreed.