Security Risk: GPT gives wrong privacy guarantees to users

Dear readers,

I just used GPT-4 for the first time. I asked it to investigate my digital footprint, and it immediately told me it would need my personal information. Before continuing, I asked GPT whether it would store this information, and it assured me my personal information would be deleted.

After asking a bit further, GPT recommended that I send an opt-out request with a template email. According to GPT, this would ensure that the conversation would not be saved and the data would not be used for training purposes.

I just got a response to my email: apparently, it’s nonsense, and GPT now has my personal information…

I know this was partially my fault and I should not have trusted GPT. But still, it should not have been this easy for it to give such guarantees to users.

Yes, the model hallucinated and misled you.
There are a few things you will want to do:

  1. Send a request to OpenAI to remove your personal data from the training data, according to this help article. The relevant section is:

We respond to objection requests and similar rights.

  2. Send a model behavior report via the form at the bottom to inform the safety team about this incident.

This should help. Thanks for sharing your story.