Count me in as a company that will discontinue use of this tool since they refuse to address this issue… I will make this the last month that I use this service as I MANUALLY copy and paste hundreds of threads into my own database…
Same here. We need to audit what our users are doing in the tool, but without an export option in the Team plan we are hitting a wall.
You don’t have the right to look into your users’ data. You can look into your own data.
LOL. Ah software vendors.
You would need a Data Processing Agreement - did you sign one with OpenAI?
Are you a vendor of OpenAI?
As an employer you are not even allowed to read your employees’ emails (without a strong reason). A chat is something very different from that. It may even include conversations about the psychological state of the employees, e.g. if they use it as a “companion”, which is very tempting. So even if there are no specific laws for that yet, I would love to see you go to court and argue that it is a violation of privacy protection laws that you can’t export your employees’ chats.
lol
I would even say that if you are located in Germany and you ask for the data, that could count as a violation of § 202a StGB and you could go to jail for up to 3 years because of your ruthless post.
Even the attempt is punishable under § 202a (4) StGB
However - I ran the research tool on this, and it suggested a practical way to get access to the data nonetheless:
For companies in the EU deploying OpenAI’s Team accounts, the following practical steps and considerations can help ensure GDPR compliance:
- Sign a DPA with OpenAI: Execute OpenAI’s Data Processing Addendum for the Team account. This legally formalizes OpenAI’s role as a processor and the company’s rights. It is a prerequisite for lawful processing of personal data through the service. Keep a copy of the signed DPA and ensure it’s referenced in your records of processing.
- Update Internal Privacy Documentation: Include the use of ChatGPT in the company’s privacy notice or internal HR policy. Describe what personal data might be processed (e.g. chat content, which could include personal data if entered), the purposes (e.g. assisting employees in their tasks, and that data may be reviewed for compliance or support), and the rights employees have. If the company allows some personal use of ChatGPT, clarify how those chats are handled or whether employees should avoid personal data input. Transparency now can prevent disputes later.
- Define Acceptable Use and Train Employees: Create guidelines for employees on using ChatGPT. This should cover data protection pointers like not inputting sensitive personal data or confidential client data unless permitted, given it will be processed externally. Employees should know that their chats on the Team account are not private from the company – i.e., that the company may access them when necessary and that data is stored on OpenAI’s servers. Simultaneously, reassure that the company will not arbitrarily snoop on them, only for defined reasons in line with policy (this assurance can encourage proper use and trust). Training or awareness sessions can help reinforce these expectations and the importance of compliance (both to avoid personal data misuse and to ensure they follow security protocols).
- Limit Personal Data and Sensitive Data Input: From a data minimization perspective, instruct users to avoid entering personal data that is not needed for the task. For example, if using ChatGPT to draft code or do research, there is usually no need to include someone’s full name or other identifiers in the prompt. The less personal data processed, the fewer GDPR concerns. Some companies implement a filtering layer that checks prompts to block accidental input of things like patient data or customer PII (a rough sketch of such a check is at the end of this post). While OpenAI doesn’t use Team data for training, the company should still guard against leaking sensitive personal information in prompts or outputs, as that could raise data protection and confidentiality issues.
- Establish a Procedure for Data Access/Export Requests: Since the Team plan doesn’t have a one-click export, designate how your organization will handle an employee data subject request involving ChatGPT data. This might involve contacting OpenAI support with the user’s details and requesting an export (citing GDPR obligations) – OpenAI’s DPA suggests they will comply with such controller instructions. Plan ahead by testing this process (e.g., request an export of a test account’s data to see how long it takes and what format it comes in). Document this procedure in your internal GDPR compliance materials or incident response plan. Similarly, have a process for when an employee asks for deletion of their data – likely, the user or an admin can manually delete conversations in the interface, which triggers OpenAI’s 30-day deletion cycle. For a complete deletion (such as wiping an account’s data), coordinate with OpenAI.
- Use Admin Controls and Settings Wisely: Although Team admins cannot see conversations, they can control other aspects. Ensure only authorized persons are made admins. Monitor the membership of the Team workspace (e.g., promptly remove ex-employees so they don’t retain access to company data, and decide what to do with their chat content – perhaps have them transfer any important outputs before departure, since once removed their chats might become inaccessible). If OpenAI provides any logging (such as usage statistics) to admins, use it for high-level oversight rather than content surveillance. Also, consider upgrading to ChatGPT Enterprise if your organization requires more robust compliance features: Enterprise offers an audit log, API access to conversation data, and admin-controlled retention policies, which can make GDPR compliance (and internal governance) easier for larger-scale deployments.
- Data Protection Impact Assessment: Conduct a DPIA for your use of ChatGPT if it meets the criteria (likely yes if you systematically process work communications through an AI, which could be seen as novel and potentially high-risk). The DPIA should evaluate risks like: potential unauthorized access to personal data in chats, the risk of employees inputting sensitive data, the inability to easily extract data for oversight (which could be a risk for rights fulfillment), and the possible misuse of the tool (e.g., to profile someone). Then document measures taken to mitigate these, many of which are the steps above (policies, technical controls, etc.). Having a DPIA on file will demonstrate compliance and informed decision-making if a regulator ever inquires.
- Respect Employee Rights and Input: Be prepared to address employees’ concerns or rights assertions. If an employee, for instance, is uncomfortable with the company reviewing a certain chat that they deem personal, have a process to handle that (maybe involve the data protection officer or HR to decide if the review is truly necessary and lawful). If an employee exercises their GDPR rights (access, rectification, erasure, restriction) regarding ChatGPT data, coordinate with them and OpenAI to respond within the required timeframe (generally one month). Remember that under GDPR, no retaliation or negative consequences should come from an employee simply exercising their privacy rights.
- Keep Abreast of OpenAI Policy Changes: OpenAI may update its team/enterprise features, especially in response to feedback and regulatory pressure. For example, if OpenAI adds a proper export tool or admin content access in Team, that changes the dynamic (it could ease compliance but also raises the need to ensure any new access is handled properly). Stay updated via OpenAI’s announcements or the “Enterprise Privacy” portal. Likewise, monitor regulatory developments (EDPB statements, DPA guidance) specific to generative AI and employment – this field is evolving, and new best practices may emerge, especially once the EU AI Act comes into force (though that’s separate from GDPR, it will influence how AI is used in workplaces).
I must add that OpenAI is not obligated to sign a DPA.
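On the filtering-layer point above, here is a minimal sketch of what such a prompt check could look like. The regex patterns, category names and block/allow decision are illustrative assumptions, not a complete PII detector; a real deployment would typically sit in a proxy in front of ChatGPT or use a dedicated DLP service.

```python
import re

# Illustrative patterns only - assumptions for this sketch, not a full PII detector.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "phone": re.compile(r"\+?\d[\d /()-]{7,}\d"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the PII categories found in a prompt (empty list = OK to send)."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    findings = check_prompt("Summarise the complaint from max.mustermann@example.com")
    if findings:
        print("Blocked, prompt appears to contain:", ", ".join(findings))
    else:
        print("Prompt passed the PII check")
```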
The big points that are missed in this broad privacy discussion are that OpenAI should be clear with customers BEFORE they ‘upgrade’ to Team. It is not at all clear that all their pre-upgrade data will no longer be exportable in bulk, nor that this is an irreversible action. If you are going to lose functionality as part of an upgrade, that should be made crystal clear, and OpenAI should offer a mitigation.
If a small company ‘upgrades’ their plan, they end up losing bulk access to their own data and now have to either spend their own time manually copying potentially thousands of conversations, pay someone to do it, or rely on third-party apps.
This is bad customer service and poor business practices. At the very least, OpenAI should ask you if you want to download a JSON file of all of your pre-upgrade conversations before you finalize your “upgrade” and warn you that you are about to lose access to it.
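For what it’s worth, the export that personal and Plus accounts get includes a conversations.json that is easy enough to load into a local database before upgrading. Here is a rough sketch, assuming the export layout I have seen on a personal account (an array of conversations, each with a title, create_time and a “mapping” of message nodes); field names may differ in newer exports, so treat it as a starting point:

```python
import json
import sqlite3

def extract_messages(conversation: dict) -> list[tuple[str, str]]:
    """Flatten one conversation's message nodes into (role, text) pairs."""
    messages = []
    for node in conversation.get("mapping", {}).values():
        message = node.get("message") or {}
        content = message.get("content") or {}
        # Keep only plain-text parts; non-text parts are skipped in this sketch.
        parts = [p for p in content.get("parts", []) if isinstance(p, str) and p.strip()]
        if parts:
            role = (message.get("author") or {}).get("role", "unknown")
            messages.append((role, "\n".join(parts)))
    return messages

def load_export(path: str, db_path: str = "chatgpt_export.db") -> None:
    with open(path, encoding="utf-8") as f:
        conversations = json.load(f)
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS messages "
        "(conversation_title TEXT, created REAL, role TEXT, content TEXT)"
    )
    for conv in conversations:
        for role, text in extract_messages(conv):
            con.execute(
                "INSERT INTO messages VALUES (?, ?, ?, ?)",
                (conv.get("title"), conv.get("create_time"), role, text),
            )
    con.commit()
    con.close()

if __name__ == "__main__":
    load_export("conversations.json")
```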
Not that the export functionality on Pro works either.
I have been trying to get MY OWN DATA for weeks and the download just breaks after a couple hundred MB.
So data privacy stuff aside… is there a solution that allows me, as the plan manager, to gather detailed usage data on a Team subscription? If not, I will need to cancel my subscription and move to another service. We’re trying to build a case for buying API access, and I can’t do that without a usage estimate to project token cost. And there is no point in continuing a subscription for something we can’t grow into.
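In case it helps anyone in the same situation: once you do have the raw text of your conversations (even a manually copied sample), a ballpark token/cost estimate can be sketched with the tiktoken library. The encoding name and the price constant below are placeholder assumptions; plug in the current model pricing yourself.

```python
import tiktoken

# Placeholder assumption - replace with the actual price of the model you would call.
PRICE_PER_1K_TOKENS_USD = 0.005

def estimate_cost(texts: list[str], encoding_name: str = "cl100k_base") -> tuple[int, float]:
    """Count tokens across the given texts and convert to an approximate cost."""
    enc = tiktoken.get_encoding(encoding_name)
    total_tokens = sum(len(enc.encode(text)) for text in texts)
    return total_tokens, total_tokens / 1000 * PRICE_PER_1K_TOKENS_USD

if __name__ == "__main__":
    sample = ["Summarise this quarterly report ...", "Draft a reply to the customer ..."]
    tokens, cost = estimate_cost(sample)
    print(f"~{tokens} tokens, ~${cost:.4f} at the assumed rate")
```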
Yeah, laws aside - is there a way I could kill our neighbor’s dog that barks all night? pffft.
Who do you think you are that you can just push data privacy stuff aside?