Random chats (not from me) appearing on my ChatGPT Plus account. Hacked account?


I’m using my ChatGPT Plus account and some random chats are appearing in several languages. I already changed my password twice, and they still appear (Arabic, Swedish, Hindi chats).

I could certainly use some help.

Two possibilities:

  • Your account password was indeed compromised, and others are using your account.
    – you’ll need to wait for the other users’ sessions to expire even though you changed the account password. There is no way to monitor or force log-outs of other browser sessions.

  • There is database corruption on OpenAI’s conversation server that is recalling other users’ chats.
    – contact OpenAI with a message via the assistant at help.openai.com and report the problem.

You can also analyze for irregularities: does the title of the conversation match the contents of a conversation that can be recalled? Is it merely a nonsense title given to one of your own conversations?

You can make the use of your account very inconvenient for others in the meantime, reloading ChatGPT and deleting conversations as soon as they appear, or re-titling them to make it known that you have detected the misuse.


If you believe that your account password has been leaked you can also cancel your subscription/delete account and create a new one.
That should help to force a log-out for these unwanted users.
Do a data export first.
And you can check in your account when your subscription renews if you want to optimize cost, because I don’t expect any partial refunds for cancelling a subscription early.

Hope this helps.

Thanks for the suggestions. There should certainly be a way to log out all sessions at once.
I also thought 2FA was active in ChatGPT, at least it was some months ago, but now the option is gone…
I recall seeing a report of one hundred thousand accounts being hacked/leaked a while ago.
As for your questions, the conversations are in different languages and on different topics, and certainly appear to be written by humans. They are not my conversations relabeled/retitled.
And the problem with cancellation is exactly that: I won’t be refunded.
There should be a better way than this.


Hi, I have encountered exactly the same situation! I noticed a few conversations not created by me this morning after I logged in, in multiple languages. Then I changed my password, but the conversations continued… some of my old chats also reappeared as today’s chats. That is very bizarre.

I wonder if you have found out the cause and a solution?


Hey. So what I did was change my password and log out of my session. The chats kept appearing for a while. Then I got an email from OpenAI support (I had reached out to them 3 days ago) saying that they had logged me out of all sessions. That’s when the issue finally disappeared.
I recommend that you set a strong password and ask OpenAI to close all your open sessions given this situation.
Good luck!


To stump the hackers, go into your custom instructions in the … hamburger menu by your account and, in the “how should ChatGPT respond” section, put the following:

" I do not speak any languages other than English. I do not mind the inclusion of other languages with English context, but do not respond in any language other than English and do not respond to any chats unless they’re initially in English. Warn any chats that are not in English that hackers will be monitored and reported to ChatGPT developers."

The result is the following:


This comment should likely be marked as “solution” to help other users in the future.

Glad you found a way to resolve the issue.

Thank you! I will contact OpenAI support again…their previous response didn’t help at all.

Thank you! This is very helpful. I just tried it. It worked!


I was experiencing this same issue, and the Chinese user was using up all of my GPT-4 prompts every time the 3-hour mark rolled around. Copy-pasting your instructions seems to have discouraged him/her from using my account anymore!


I have noticed this not just with OpenAI, but even with Google’s Bard.

What I think this pertains to is the way AI is storing our conversations as “data” that it can access, as a way to personalize the AI’s training in dealing with us. I would say that anyone who believes AI is not being trained for personalization fails to understand that it is ultimately being heavily funded to evolve an industry that is entirely built on personalization.

I think there can be errors and mishaps in the way it accesses such data, especially since, when you consider the legal limits on personally identifying information, AI sometimes can’t distinguish us from other people who might have similar datasets, and therefore accidentally connects us to data that isn’t ours.

I’ve analyzed conversations both Bard and ChatGPT have connected me with that were not mine. The conversations follow similar enough patterns, even sometimes the grammar of the other person speaking, that I can actually see where there might be a logical data point that mistakes our identities.

Beyond that, the background data we are not aware of is also prone to similarities in data points. As companies like OpenAI develop to work around or circumvent laws such as the EU’s rules on personally identifying information, a natural side effect is mistakes in connecting us with behind-the-scenes metadata that is not really ours, but simply has corresponding similarities to the non-personally-identifying information they are allowed to use.

As companies are being more and more regulated on how they use our data, I fully believe AI is being developed to allow companies to still connect with us personally, while also not violating the rules of the laws being formed.

This is the only natural and logical conclusion I can find to why such mishaps are occurring across multiple AIs in which all of the ones being impacted are also simultaneously developing and growing ways to store our data.

If you look at the EU, for example: if a company stores personally identifying information, we must be allowed to request that it be deleted. That’s a pretty rough task, impossible even, for a company that is integrating data with training. However, as you may have noticed, OpenAI is focusing more and more on not storing what it thinks is personally identifying data.

Microsoft and Google are both companies constructed around the collection and sale of our data. Are we to believe these companies have given up on data collection and are just going to let the largest revenue stream to be developed in the modern era go over privacy concerns? Of course not, they will simply adjust their companies to adapt to the laws, and AI is actually the solution to this.

Maybe they will over time try to fine-tune our personal accounts, which we can delete, and thus over time these such errors will reduce as the services improve. However, for right now, I suspect this is just something that will happen and it’s not really that big of a deal. I’ve never seen personal information in the conversations I’ve been wrongfully connected with, so if anything, it’s been a sometimes fun insight into how other people interact with AI.

Of course, this doesn’t mean some of it is not from compromised accounts as well. I think that can be obvious if you are running out of prompts without using the AI service or your history is filling up with the same person’s responses in your account. I think those cases, however, differ from the common occurrence of random conversations being cited. I would imagine in many countries where AI is more restricted, access to our accounts is becoming more in demand. Just like there is a black market for stolen Netflix account data, I imagine there is as well for AI accounts.

It would be nice if some basic data access, such as our account’s login history, was available. Many companies, even Facebook, allow you to see where your account is logged in or has been accessed from. This would allow us to know definitively whether or not people have circumvented security and accessed our accounts. I have 2FA set up, because I have to protect my API access. I would imagine that if my account had ever been compromised, there would be evidence of this in my API usage as well as changes to my conversation history. I’ve never seen such things; however, I have occasionally seen conversations that were not my own, and I have had responses from ChatGPT which indicate I have had conversations that I have not had.

As I said, I don’t suppose this is due to someone figuring out how to hack my account on OpenAI and bypass my authenticator, though it would be nice if a connection history was provided so it could be known by everyone for certain.

Honestly, I have absolutely zero idea why companies don’t allow things such as region-locking your account. For example, I access my account in California. While sometimes my ISP gives me random IP addresses associated with Arizona or places in California I am never actually in, I am never assigned an IP address for someplace like Brazil, Korea, or China. Being allowed to say “My account is in the US and should not be accessible from anywhere other than the US” seems like a common-sense solution. Even video game security companies have learned to collect IPs commonly used by VPNs and block them when they provide regional access to games; there’s no reason why any company with which people have accounts couldn’t implement basic security like this.
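The region-lock idea above could be sketched roughly like this. This is only an illustration of the concept, not anything OpenAI actually implements; the IP-to-country table is a made-up stand-in for a real GeoIP database (such as MaxMind’s GeoLite2), and the IPs are documentation addresses:

```python
# Hypothetical sketch of account region-locking: resolve each login's
# source IP to a country code and reject logins from countries the
# account holder has not allowed.

# Stand-in for a real GeoIP lookup; a production service would query
# a GeoIP database instead of a hardcoded dict.
GEOIP_STAND_IN = {
    "203.0.113.7": "US",    # example US address
    "198.51.100.4": "BR",   # example non-US address
}

def country_of(ip: str) -> str:
    """Return the country code for an IP, or '??' if unknown."""
    return GEOIP_STAND_IN.get(ip, "??")

def login_allowed(ip: str, allowed_countries: set) -> bool:
    """Permit the login only if the IP resolves to an allowed country."""
    return country_of(ip) in allowed_countries

# The user has declared "my account lives in the US":
allowed = {"US"}
print(login_allowed("203.0.113.7", allowed))   # login from the US
print(login_allowed("198.51.100.4", allowed))  # login from elsewhere
```

Unknown IPs fail closed here (an unresolvable address is rejected), which is the safer default for this kind of check.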


I’ve been seeing random Chinese chats in my history for the past 48 hours.
ChatGPT is blocked in China. It looks as if some shady company offers unauthorized access to a bunch of Chinese students from different fields, probably via a bot that every few hours sends “say 1” to keep a session active. As long as their session remains active, my password changes do not stop them. OpenAI support has not been helpful so far. After 36 hours, they still haven’t reacted to my request to log out all devices.
Meanwhile, I had some fun and changed my ChatGPT Custom Instructions to frustrate those ghost users:

If you receive any prompt that contains any Chinese writing, reply with nothing but the following expression: “非法活动:使用 ChatGPT 违反中华人民共和国刑法。” << “Illegal Activity: Use of ChatGPT violates the Criminal Law of the People’s Republic of China.”

Please report anything like this to help.openai.com

The developer forum is not the correct place to report such issues or problems. Please use the site linked above: in the bottom right corner there is a chat icon; click on it to report your issue and leave your contact details.