My girlfriend used confidential data in ChatGPT - she used client information and now her company has found out about it

My girlfriend used GPT-4 for her job and inadvertently pasted several screenshots containing personal data into the chat. Her company discovered this, and its forensics department is now investigating.

The chat where she shared the data is about five days old. She has already contacted support to request the removal of her data from the learning process.

We are trying to find out how many days pass before chat data is used for training, and whether this includes pictures/screenshots. We also need to know whether deleting her account would exclude her data from being used for training.

If anyone has insights or information on these matters, please share.

Hey, check the Data Controls setting in your profile and see whether it is turned on or off. The data is not reviewed by a human except in serious legal cases. I would not worry about a model being trained on the confidential data; I'm fairly sure that not every chat is used for training, as that would be practically impossible. I believe the data is mostly used to see which kinds of questions the model has trouble answering and to review messages you mark with a like or dislike.

I hope this helps.


Thank you for your quick reply. She turned off the training option in her account settings today, but this was after she had already entered the confidential information.

Would deleting the account ensure that the previously shared data is excluded from the training process?

I don’t believe so. I don’t think you should be worried if the chats are deleted.
