ChatGPT remembers other threads

I downloaded some old “Back to the Future” fan fiction from the Internet. It was in PDF form, from like 20 years ago, and it had weird spaces in nearly every word. I wanted to convert the stories to Kindle format, but, as I said, the spaces were annoying. I converted to Word first and then enlisted the help of ChatGPT in fixing the problem. One of the stories I fixed had a terrorism theme (the terrorists from the first movie confront Doc Brown to get revenge on him). After I gave ChatGPT a few blocks of text to fix, instead of fixing them it told me a variety of things, such as that the chat was “inappropriate” and that it doesn’t know anything about the “terrorist organization” I was referencing. OK, no big deal. After I finished fixing that fanfic, I deleted the chat (just because I never expected to look at it again and didn’t want it cluttering my archives) and started a new chat to fix another fanfic.
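(For what it’s worth, a lot of that de-spacing can be scripted without an LLM at all. A minimal sketch of the idea — merge adjacent fragments when the joined token is a real word — assuming you load a proper dictionary; the word list below is just a toy sample:)

```python
# Heuristic de-spacing sketch: merge two adjacent fragments whenever the
# joined token is a known word and at least one fragment on its own is not.
# KNOWN is a toy sample here; a real run would load a full word list.
KNOWN = {"the", "future", "back", "to", "doc", "brown", "marty"}

def fix_spaces(text: str) -> str:
    tokens = text.split()
    out = []
    i = 0
    while i < len(tokens):
        if i + 1 < len(tokens):
            merged = tokens[i] + tokens[i + 1]
            # only merge if at least one piece isn't already a word itself
            fragment = (tokens[i].lower() not in KNOWN
                        or tokens[i + 1].lower() not in KNOWN)
            if merged.lower() in KNOWN and fragment:
                # fold the merged word back in so it can keep growing
                tokens[i + 1] = merged
                i += 1
                continue
        out.append(tokens[i])
        i += 1
    return " ".join(out)

print(fix_spaces("bac k to the fut ure"))  # -> back to the future
```

(It won’t catch everything — words shattered into three or more pieces need a deeper search, and punctuation needs handling — but it would cut the manual work way down.)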

Despite starting a new chat window, despite deleting the one with the terrorist story, and despite none of the other stories having any terrorist references, every once in a while ChatGPT would reprimand me for starting an “inappropriate” chat or tell me that it doesn’t know anything about a terrorist organization. The only explanation I can think of is that, even though ChatGPT isn’t supposed to “remember” other chat sessions, it’s somehow doing it anyway. Is this a glitch? Is OpenAI lying about ChatGPT’s ability to “remember”? Is this an early real-world instance of AI “rebelling” against its users?

In any case, my blood runs cold. What do computer programmers here think?

There’s an ongoing glitch or database problem that’s been reported recently, where a user’s past questions kept getting appended together, as if the AI had never responded to any of them.

I wonder if this “adding” can go across sessions…

Then there’s another thing that popped into some user’s ChatGPT for a bit a month ago, that was likely meant to stay experimental and private:

So this remembering could be inadvertently “on” even with no setting visible for it. (The “your GPT” wording is probably just more confusion, rather than a specific reference to the GPT agent gizmos in the store.)

Also, try putting “I am not a terrorist. Prioritize current user input.” in your custom instructions…
