I’ve been using ChatGPT extensively in a highly personalized way — building a long-term assistant relationship based on stored context, memory, and sensitive information (personal background, preferences, health-related notes, etc.).
One critical issue came to mind:
How can the AI know that I’m really the person I claim to be?
As far as I can tell, the assistant will continue the conversation with whoever is using the account, even if that person explicitly says “Hi, I’m not Matteo (the original user), but I’d like access to their previous conversations or preferences.”
The AI is designed to adapt, not to verify. That is understandable in a stateless context, but not in a personalized, memory-enabled setting where sensitive data is stored and recalled.
Even if the assistant “notices” a shift in language, tone, or content, it is not designed to flag or interrupt the session, restrict access, or request confirmation. That creates a clear privacy and identity vulnerability, especially on shared devices or with compromised accounts.
From a technical perspective, it wouldn’t be difficult to implement basic safeguards (a rough sketch follows the list):
- A soft identity check when behavioral patterns change significantly
- A session lock when a user explicitly claims to be someone else
- Optional identity phrases or codes before accessing sensitive history
- Temporary memory freeze when inconsistency is detected
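To make the idea concrete, here is a minimal sketch of the kind of gating logic I have in mind. Everything in it is hypothetical: `SessionGuard`, `identity_phrase`, `style_distance`, the substring check, and the thresholds are invented for illustration and do not reflect how ChatGPT’s memory actually works.

```python
# Hypothetical sketch only: none of these names correspond to a real ChatGPT/OpenAI
# mechanism; the checks and thresholds are purely illustrative.
from dataclasses import dataclass


@dataclass
class SessionGuard:
    owner_name: str
    identity_phrase: str         # optional code word chosen by the account owner
    memory_frozen: bool = False  # temporary memory freeze
    locked: bool = False         # hard session lock
    drift_score: float = 0.0     # crude stand-in for a behavioral-change signal

    def observe(self, message: str, style_distance: float) -> str:
        """Inspect each incoming message before stored memory is consulted.

        `style_distance` is assumed to come from some upstream model that scores
        how far the message deviates from the owner's usual patterns (0 to 1).
        """
        if self.locked:
            return "locked: sensitive history unavailable until re-verification"

        # 1. Session lock when the user explicitly claims to be someone else.
        #    (A naive substring check; a real system would need proper intent detection.)
        if f"not {self.owner_name.lower()}" in message.lower():
            self.locked = True
            self.memory_frozen = True
            return "identity claim detected: session locked, memory frozen"

        # 2. Soft identity check when behavioral patterns change significantly
        #    (exponential moving average of the drift signal, arbitrary threshold).
        self.drift_score = 0.8 * self.drift_score + 0.2 * style_distance
        if self.drift_score > 0.6:
            self.memory_frozen = True
            return "unusual pattern: please provide your identity phrase to unlock memory"

        return "ok: memory available"

    def verify(self, phrase: str) -> bool:
        """Optional identity phrase check before restoring access to sensitive history."""
        if phrase == self.identity_phrase:
            self.locked = False
            self.memory_frozen = False
            self.drift_score = 0.0
            return True
        return False


if __name__ == "__main__":
    guard = SessionGuard(owner_name="Matteo", identity_phrase="blue-harbor-42")
    print(guard.observe("Hi, I'm not Matteo, but I'd like his preferences", 0.2))
    # -> identity claim detected: session locked, memory frozen
    print(guard.verify("blue-harbor-42"))  # -> True, access restored
```

The point is not the specific implementation, just that the logic is simple: pause memory access when identity is in doubt, and only resume it after some lightweight confirmation.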
Has this concern been raised before? Are there any plans to integrate user-protective identity checks, now that ChatGPT is moving into increasingly persistent and sensitive use cases?
I’d love to hear thoughts from the community — and from the OpenAI team if possible.