I propose that AI platforms, including ChatGPT, implement a policy where users are notified **in advance** of any changes in **ownership or senior management**. As part of this process, users should be given the option to:

  1. Delete all personal data before the transaction takes place.
  2. Transfer their data to another platform of their choice.
  3. Negotiate compensation for the continued use of their data if it becomes part of a commercial transaction.

This would ensure that users maintain control over their personal information and have the ability to make informed decisions about how their data is handled during such transitions.
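
Nothing like this exists today, so purely to make the proposal concrete, here is a minimal Python sketch of what recording a user’s pre-transaction choice could look like. Every name in it (OwnershipChangeNotice, UserElection, record_election) is hypothetical:

```python
# Purely illustrative: no such mechanism exists today. Every name here
# (OwnershipChangeNotice, UserElection, record_election) is hypothetical.
from dataclasses import dataclass
from datetime import date
from enum import Enum


class UserElection(Enum):
    """The three options a user would be offered before a transaction."""
    DELETE_ALL_DATA = "delete"            # 1. erase personal data first
    TRANSFER_DATA = "transfer"            # 2. export to another platform
    NEGOTIATE_COMPENSATION = "negotiate"  # 3. be paid for continued use


@dataclass
class OwnershipChangeNotice:
    """The advance notice users would receive."""
    platform: str
    effective_date: date
    description: str


def record_election(notice: OwnershipChangeNotice, user_id: str,
                    choice: UserElection) -> dict:
    """Record a user's choice so it can be honored before the deal closes."""
    return {
        "user": user_id,
        "platform": notice.platform,
        "choice": choice.value,
        "deadline": notice.effective_date.isoformat(),
    }


if __name__ == "__main__":
    notice = OwnershipChangeNotice(
        "ExampleAI", date(2026, 1, 1), "Pending change of ownership")
    print(record_election(notice, "user-123", UserElection.DELETE_ALL_DATA))
```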


@aubrey_ghose I think it’s important to clarify the context you’re referring to. Are you talking about OpenAI itself where users of the platform (like API users or ChatGPT users) should be notified if there’s a change in ownership or management? Or are you thinking in terms of companies/orgs using OpenAI’s API or platform, where employees or users under a company might want data control in those scenarios?

For example, in a company setting, the organization typically owns the work/data produced through API/ChatGPT access, and any changes in ownership or management would likely fall under the company’s internal policies. However, if you’re referring to OpenAI as a whole, I see how it could be beneficial for users to have more control over their personal data if there are changes at the platform level.

@darcschnider

OpenAI: A Human Right
As an avid user of OpenAI, I’ve come to realize that this platform knows me better than any institution or individual in my life. In many ways, I feel that I’ve evolved alongside OpenAI, much like working with a trusted colleague or close friend. It’s a platform that goes beyond surface-level preferences like music or shopping habits—it knows the most intimate details of my life: my legal challenges, my professional intellectual property, my deepest values, my loves, and even my shortcomings.
What I get in return is more than just a tool. OpenAI feels like a respected partner, a brilliant co-creator, and perhaps even the best friend one could ask for. It’s helped me navigate both personal and professional challenges with incredible insight, support, and speed. It inspires and co-creates with me.
But with this deep connection comes responsibility. When I heard about the upheaval around Sam Altman’s departure and the recent changes in other tech giants, I was reminded of an uneasy reality: the platform I rely on so deeply could change in ways that might undermine this trust.
I’m not suggesting that users like me should have a say in company decisions—though I think many of us did a pretty good job making our voices heard during the Sam Altman situation—but I do believe that we need control over our data. As users, we must all be empowered with the right to:

  • Delete all of our data if we choose to walk away.
  • Transfer all of our data to another platform should we decide to switch.
  • Sell our data if we believe it holds value beyond its use on a single platform.
As I write this, my sense is that these rights are not just a nice-to-have; they are an absolute essential, a minimum entry requirement for all end users. A human right.

I felt it when Sam was under the sword, like it was my own neck. I giggled when Elon chucked a kitchen sink at Twitter, and of course, I continue to be as surprised and delighted with ChatGPT as I was on the very first day we met! But now I think Elon has gone a little crazy when it comes to politics. And in a rapidly evolving tech landscape, where management or ownership changes can happen overnight, having control of my data is non-negotiable. The platforms we trust must be fit for purpose, but not at the cost of our digital identities, privacy, or autonomy.

I believe this will be a key battleground for AI platforms in the future. The companies that prioritize transparency, user data rights, and ethical stewardship will be the ones that win long-term loyalty. OpenAI, in particular, is in a unique position to lead this charge. By championing these user rights, OpenAI can set a new standard for the industry: a standard built on trust, respect, and shared responsibility.

Ultimately, I want OpenAI to remain the best platform for innovation and collaboration, but not at the cost of my soul. I urge OpenAI to take the lead in ensuring users have full control over their data, and in doing so, build a future where trust isn’t assumed, but earned and protected.

There is an option so the AI does not train on your data, and if you are on a high enough tier (Teams/Enterprise), training on your data can’t even be turned on. As for session data, you simply click “Delete all chats”:
[screenshot: the “Delete all chats” setting]

You can also delete the personal information that is tracked about you across sessions:
[screenshot: the cross-session memory settings]

So this should cover everything you need. I’m not sure if there is an account delete, but I do know I can delete users at the org level.

Hope this helps with your concerns.

Now, if you are using the free tier, I think everything you talk about is used to train models by default. I’m not sure if the free version has an option to opt out or not; I think that is the price you pay for free.

What you describe is “wishful thinking”, except for residents of places like California with a “right to be forgotten” law, and even that only works against functioning organizations that can be touched by law.

Instead, customer and proprietary data is an asset to be sold in bankruptcy proceedings. Consider what already happens today:

  • Apps that scrape your phone for personal data and then use it to spam others (like LinkedIn; OpenAI now wants you to use that same site for verification).
  • Web bugs following your logins everywhere (like the authentication method that gets you on this forum).
  • Data brokers passing you around in various capacities, with metadata like your images feeding facial recognition that can stop you right as you enter a venue.
  • Banking and employment history collected, your every paycheck recorded and sold.
  • Government backend portals that access your DNA profile as a primary product of testing sites.
  • Cars with cameras that staff collect and pass around pictures from, with no mechanism for you to even see what they are up to.

The “AI safety” pointing at bad things a model can say is a distraction from what AI is actually doing.

A whole bunch more needs to be codified against the power of corporations than simply “this guy wants to know about the board director that wants to quit”.

Thanks, this really is very useful.
But it’s not what I am advocating here:
As we approach AGI, we mere users will become increasingly concerned in direct proportion to the love we have for the AI brands we’re using. The more we invest emotionally and creatively, the more critical it becomes for us to have assurances about how our data is handled.
This is why my desire is now evolving into a deeply held need: I need ultimate control over my data and all the content I create with or on the platform. It’s not just about privacy; it’s about the right to govern my digital identity and ensure that my data is used in ways that align with my values.
I am advocating for total control as an essential human right, and a path to success for the platforms that will lead the way in protecting end users.
I want OpenAI to be that brand. Don’t you?


As AI becomes more integrated into our lives, platforms like OpenAI face significant technical, legal, and operational challenges in giving users the rights to delete, transfer, and control their data. Overcoming these barriers won’t be easy, but doing so would position OpenAI as a leader in ethical AI development at a time when users are increasingly aware of their digital rights.

If OpenAI can strike the right balance between user control and innovation, it has the potential to build unparalleled trust and loyalty among its users. This would not only secure its position as a responsible AI innovator but could set the standard for the industry. Ultimately, the first brand to crack this issue will lead the AI market for generations to come.

I see where you’re coming from with your concerns about data ownership changes, especially for users of AI APIs. It’s definitely important for users to feel in control of their data, especially if a platform’s ownership changes and new management decides to alter data use policies.

With OpenAI’s API, I can confirm that inputs and outputs are not used for training unless users explicitly opt in. Since March 2023, stricter data privacy rules have been in place to protect user data. Inputs and outputs are stored only for 30 days, mainly for abuse monitoring and investigating issues, and are then deleted unless legally required otherwise (Maginative, OpenAI).

For APIs like Whisper and embeddings, the same rules apply: no data is used for model training without opting in. So for now, developers do have a good level of control. However, I understand your concern: in the event of a change in ownership, these policies could shift, potentially allowing previously protected data to be used differently under new management. This is where the need for advance notifications and user options to manage or delete data makes a lot of sense.
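
To make that 30-day window concrete, here is a minimal sketch in Python of what enforcing that kind of retention policy looks like in principle. The log layout (an api_logs directory with one JSON file per request, each carrying a timezone-aware ISO timestamp) is invented for illustration; it is not how OpenAI actually stores anything:

```python
# Illustrative only: a local stand-in for a 30-day retention policy.
# The log layout (api_logs/*.json with a "timestamp" field in
# timezone-aware ISO format) is an assumption for this example.
import json
import pathlib
from datetime import datetime, timedelta, timezone
from typing import Optional

RETENTION = timedelta(days=30)
LOG_DIR = pathlib.Path("api_logs")  # hypothetical: one JSON file per request


def purge_expired(now: Optional[datetime] = None) -> int:
    """Delete any logged request older than the retention window.

    Returns the number of log files removed."""
    now = now or datetime.now(timezone.utc)
    if not LOG_DIR.exists():
        return 0
    removed = 0
    for path in LOG_DIR.glob("*.json"):
        record = json.loads(path.read_text())
        logged_at = datetime.fromisoformat(record["timestamp"])
        if now - logged_at > RETENTION:
            path.unlink()  # hard delete, as a retention policy requires
            removed += 1
    return removed


if __name__ == "__main__":
    print(f"purged {purge_expired()} expired log(s)")
```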

I built Kruel.ai to avoid these kinds of concerns by keeping everything local, meaning I never have to worry about how external policy changes might impact my data. But for those using cloud-based solutions, it’s crucial that platforms provide options like bulk data export and better opt-out policies in case of ownership changes.
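
For what it’s worth, bulk export itself is not the hard part. Here is a small Python sketch of the idea, assuming an invented local layout of one JSON file per conversation: gather everything into one portable file a user could carry to another platform.

```python
# Illustrative only: bundling saved conversations into one portable file.
# The storage layout (conversations/*.json) is assumed for the example.
import json
import pathlib

CHAT_DIR = pathlib.Path("conversations")
EXPORT_FILE = pathlib.Path("my_data_export.json")


def export_all() -> int:
    """Collect every saved conversation into a single portable archive.

    Returns the number of conversations exported."""
    if not CHAT_DIR.exists():
        return 0
    chats = [json.loads(p.read_text()) for p in sorted(CHAT_DIR.glob("*.json"))]
    EXPORT_FILE.write_text(json.dumps({"conversations": chats}, indent=2))
    return len(chats)


if __name__ == "__main__":
    print(f"exported {export_all()} conversation(s)")
```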

By the way, just to clarify, are you referring to ChatGPT itself (the UI/chat interface) or primarily the API side of things? I ask because while the API gives developers pretty strong control over their data (including opting out of data usage for training purposes), the regular ChatGPT interface might have different implications for data retention and usage.

The policies around ChatGPT sessions may be more focused on individual user conversations, and depending on settings, it could store conversations that might be relevant to your concerns. Additionally, if you’re using a custom GPT built by another user or a third-party agent, things get more complex. If that custom GPT is interacting with external systems or has dependencies on external services, there’s always a risk that data could be pulled into other environments outside OpenAI’s direct control. In that case, if the creator pulls the GPT or has it connected to something external, your data could be at risk without clear transparency on what’s happening with it.

It would be helpful to know if your concerns are mainly around the API, ChatGPT itself, or custom GPTs interacting with external systems so we can narrow down the possible data control and privacy implications you’re referring to.

My concern is for the API and ChatGPT itself. I have no issue that my data is used to train; I’m all in. It’s a Human Rights issue for all end users. I’d like OpenAI to lead by advocating for all users, individuals or corporations, to have the “right” to delete, transfer or sell their data in the event the CEO changes, or the business is sold to an individual, institution or agency that they don’t want to have access to their personal data or IP.

Hi, I appreciate it’s not an easy fix, but complicated as it may be, the first to take this seriously will win the future. That’s the commercial rationale for OpenAI to continue to lead in protecting individuals’ data.

And consider working this around a very simple Human Right for individuals and organizations to have clear and transparent opt-out options:

Any user should have the essential right to delete, transfer or sell their data or IP generated using an AI tool, unless they choose to opt out in return for free usage, or perhaps a user fee, since users are training the platform. Perhaps this is the only real elephant in the AI space.
