ChatGPT only remembers the current chat session. When you start a new conversation, it forgets everything from previous chats. It won’t carry over personal information you’ve shared in past sessions.
But if you use the “Memory” feature, ChatGPT can learn across chats and remember details about you, such as your preferences or things you’ve explicitly saved to memory. You can view, delete, or disable these memories anytime in your settings.
For example, you can tell it: “Remember, I’m afraid of snakes. When you recommend movies, don’t suggest any about snakes.” It will then avoid recommending those kinds of movies.
If you disable Memory, it stops using and creating these memories.
This does not appear to be true. After deleting all memories and all chats, toggling Memory off, logging out, restarting the app, logging back in, and turning Memory back on, ChatGPT was still able to tell me all sorts of personal information. That information is being injected into the model’s context at the beginning of the chat, and wherever it is stored seems to be beyond a customer’s ability to erase.
You are correct. There appears to be a context injection at the beginning of every logged-in chat, based on a personal user profile built from data and inferences drawn from all previous chats, retained and handled in a layer between the UI and the LLM itself. This is completely separate from the declared memory that the user can control; neither the user nor the LLM itself can access it.
Okay so. What you’re describing happens with every single message, not just once at the top of the conversation. You probably notice it more in the early messages of a thread because you “just started talking” and it feels surprising, not because a single preface about you is being dropped in.

ChatGPT uses the messages in your current thread (up to a context-window limit), and in long conversations older details can fall out of the active window. Across new threads, whether earlier info influences answers depends on your Memory settings. Memory can work through Saved Memories (details you explicitly ask ChatGPT to keep, like preferences or goals) and chat-history reference (where ChatGPT may pull helpful context from past conversations even if you never asked to save a specific memory).

You can manage or delete saved memories, toggle Memory features on or off in Settings, and use Temporary Chat when you want a blank-slate conversation that neither references memory nor creates new memories. Note that deleting a chat doesn’t automatically delete any saved memory created from that chat, so fully clearing something usually means removing the saved memory as well.

If you want personalization and continuity, leaving Memory on often improves relevance over time, because ChatGPT can reuse stable preferences instead of asking you to restate them. Many people also find that addressing ChatGPT directly (and using a consistent name, if you like) keeps prompts clearer and reduces ambiguity. That’s not because a name changes capabilities, but because the conversation stays better organized and easier to steer.
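To make the layering concrete, here is a minimal sketch of how a chat product *might* assemble the context sent to the model on every request. All names, structure, and limits here are invented for illustration; this is not OpenAI's actual implementation, just the general pattern of "system material plus a trimmed thread window":

```python
# Hypothetical sketch of per-request context assembly.
# Nothing here reflects OpenAI's real code; it only illustrates why
# older turns "fall out" and why memory shows up on every message.

def assemble_context(custom_instructions, saved_memories, thread, max_messages=6):
    """Build the message list for one model call.

    custom_instructions: user's standing instructions (str or None)
    saved_memories: list of remembered facts; empty when Memory is off
    thread: list of {"role": ..., "content": ...} dicts for this chat
    max_messages: stand-in for a token-based context-window limit
    """
    system_parts = []
    if custom_instructions:
        system_parts.append(custom_instructions)
    if saved_memories:  # only included when Memory is enabled
        system_parts.append("Known user facts: " + "; ".join(saved_memories))
    messages = [{"role": "system", "content": "\n".join(system_parts)}]
    # Only the most recent turns fit; earlier ones drop out of the window.
    messages.extend(thread[-max_messages:])
    return messages
```

Under this sketch, turning Memory off just means `saved_memories` arrives empty, while the current thread (up to the window limit) is always included, which matches the behavior described above.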
“Context injection” makes it sound like something hidden or mysterious, but it’s really just whatever context you’ve enabled: the current thread, custom instructions, and optionally saved memories and chat-history reference.