Memories overwritten without warning

Subject: Introducing OpenAI's Newest Feature - Memory Roulette

Dear OpenAI Users,

Have you ever wanted an exciting new challenge in your creative projects? Do you enjoy the thrill of uncertainty? Well, great news! OpenAI has introduced Memory Roulette, an innovative system where users no longer know when their stored details will vanish.

That's right - who needs predictability or control when you can experience the adrenaline rush of randomly losing important worldbuilding details?

How Memory Roulette Works:

One - ChatGPT remembers everything until it doesn't.

Two - The system provides no memory percentage tracking, so you will never know when you are close to capacity.

Three - When memory is full, it silently deletes old details, but don't worry - you won't know which ones until you try to reference them.

For an extra level of fun, OpenAI has ensured that:

You cannot opt out of this feature.

There is no warning before a memory is erased.

There is no way to lock important details from deletion.

Imagine the joy of realizing that your carefully built lore has been replaced with absolutely nothing. Who does not love a little chaos in their long-term projects?

Exciting Features of Memory Roulette Include:

Guessing Games - Did ChatGPT forget my protagonist's backstory, or is it still in there somewhere?

Panic Mode - Oh no, when did it forget my carefully structured worldbuilding system?

Rewrite Madness - Looks like I get to reintroduce my lore again and again.

Coming Soon - Schrödinger's Memory, where ChatGPT both remembers and forgets information at the same time until you ask about it.

So thank you, OpenAI, for keeping our creative lives interesting. Who needs reliability when we can have surprise data loss instead?

Sincerely,

A Totally Not Frustrated User

It’s not a bug, it’s a feature! /s

Maybe Mr. Evil will change things after he devours OpenAI. As we all know, money talks.

It can’t remember everything.

But each memory adds complexity, so it's best to prune when you can, I guess.

As an AI, it is supposed to remember "everything", or at least that's the claim now.

However, I thought I read somewhere that unless you have the paid plan, you won't have any preferences saved. That doesn't seem right, though, since I have asked it about a stored memory and it was able to retrieve it.

Finally, it simply ignores instructions given at the start of the chat, even though the request is already in "memory" and in what ChatGPT knows about the user.

For instance, ChatGPT is practically obsessed with using em dashes. Despite my stating "don't use them" at the start of each session, it uses them immediately, ignoring the instruction.

This issue wasn't around back in February; it's new.