So OpenAI’s solution to the EU’s privacy laws is... to lock us out entirely?

Let me get this straight. The new memory features, arguably one of the most useful and long-awaited updates to ChatGPT, are being rolled out globally… except to users in the EEA (which already includes Norway, Iceland, and Liechtenstein), the UK, and Switzerland. Why? Because OpenAI’s infrastructure apparently can’t (or won’t) comply with our privacy laws?

Instead of working to meet GDPR requirements, like actual data minimization, meaningful consent, and real deletion, they just blanket-disabled the feature for an entire continent. And they’re still charging us full price.

Let’s call this what it is:
They’re afraid of being held accountable under European law, because they’d have to respect user data, offer real deletion (the GDPR’s right to erasure), and let people see what’s being stored about them (the right of access). Features that should be standard in any ethical AI platform.

And while we’re at it, let’s not pretend deleted chats are truly deleted. Unless users explicitly opt out of training, those chats are still being ingested and used, even after they disappear from our UI. In the EU, that’s a violation. In the US? A business model.

I get it: compliance is hard. But right now it looks like OpenAI would rather shut the door on millions of users than figure out how to do the right thing legally. And worse? They’re doing it quietly, offering nothing but vague “stay tuned” messages and no clear roadmap for when, or even if, this will change.

It’s no wonder users are seriously looking at alternatives like Claude, Gemini, Grok, or local LLMs. If OpenAI won’t provide the features we’re paying for, and won’t be transparent about why, what incentive do we have to stay?

You can do better than this. But you need to actually try.

Related: “Italy fines OpenAI over ChatGPT privacy rules breach” (Reuters)