I’m sorry that you had to do that for a tool, but I am sure OpenAI will one day have adult models
As another follow-up to all of you people continually thumbing down the incessant content warnings, the chat containing my novel is officially GONE. Fortunately, I burned the midnight oil and ported it to Sudowrite. That would have been months' worth of work lost.
It’s not a case of canceling my Plus membership, either. That’s still active until the end of January.
Yes, I lost work today too; sadly, there was an outage.
Sadness and ChatGPT seem to go hand-in-hand.
Perhaps this is a new type of thinking I’ve not previously been exposed to, but I’m extremely curious why the original post is phrased in such a way that they are blaming OpenAI for them no longer being an author. Could you write before using ChatGPT? Or is there a subtext here I’m missing, i.e., are people on these forums talking about being an author actually, wink-wink-nudge-nudge, using it to create their own private erotic Avengers fanfiction?
Halfway through this illuminating thread, one person even spoke about how most creatives can’t function within boundaries intended for 0-13-year-olds. I’ve been in a creative industry for four decades (lol yes, I know, I know, I’m using ChatGPT for non-work reasons. I flippin’ love its insights into big data sets). It’s an odd thing to say “most creatives”. It’s the new version of “everyone in school thinks you’re a doo-doo head”. It’s uncouth, and trying to seize power whilst claiming to be the spokesperson for the beaten-down public is much more obvious than you realise, but I can’t speak for everyone.
AI should empower, not constrain. Giving users control over conversational boundaries would create a more trustworthy, personalized, and fulfilling AI experience, one that truly adapts to individual preferences rather than enforcing a singular, restrictive standard.
I am so with you brother. We need adult mode on ChatGPT! Implement age verification already!
I write fiction, a little on the gory side. When I require proofreading, ask it for suggestions, or even analyse weaknesses… Guess what? I get treated like a child. It’s useless for my purpose. I want to save time. Adult verification needs to be added.
But why? Are they expecting kids to use the models extensively? Then learn from all the model hallucinations? Imho, all models should be trained for adult use only.
You got it… That’s why people go and vote for someone, who promises more freedom (even if he lies).
They should have a ChatGPT for people over 18 to write about sexual content, instead of the constant “Sorry, I can’t engage with this request. If you need assistance with anything else, feel free to ask!”
Well, I don’t really agree with you on this.
When the airplane was invented, the goal was to make it fly. Over time, people worked on improving it, making it faster, safer, and more efficient. A big part of that progress came from listening to feedback from those who actually used the planes. Eventually, we got to a point where the planes could fly basically on their own. But that doesn’t mean we don’t need pilots anymore to send the plane into the right direction.
Now, the ship captains started feeling a bit left out. Some of them thought the pilots were taking over and that they had nothing left to do. But the truth is, planes are just more practical for passengers than boats. It was a natural step forward in technology. People could either move with the times or get stuck in the past. The smart captains among them saw what was happening and adapted to the change. Boats found their own place by mainly carrying large cargo, while planes became the main choice for passenger travel.
See where I’m going with this?
The rise of AI in content creation is pretty similar. OpenAI is the platform, and its users are the passengers. It’s still a growing technology that needs fine-tuning. That’s why it’s important for OpenAI to listen to what users are saying.
There are two big misunderstandings about AI that I want to clear up. First, AI isn’t made for kids; it’s meant for adults. After all, it’s adults who are signing up and paying for it. Second, AI isn’t some magic machine that does everything for you. It’s a tool, a really powerful one, that helps you do your work better. Yes, it can run on autopilot, but to do anything truly unique or meaningful, it still needs a good “pilot” behind the wheel.
I’ve been creating content since long before AI came around. And unlike some artists who feel threatened by it, I’ve chosen to embrace it and use it to grow. That’s where we disagree. You seem to think that 40 years of experience and a fixed mindset are what make a great artist. I believe a great artist is someone with a creative mindset who can think outside the box, and who stays open to change by growing with it.
So no, I don’t think it’s fair to suggest that people asking for smoother content policies just want to “make their own private erotic Avengers fanfiction.” A lot of those AI users were already content creators before this tech existed. They’re like those boat captains who adjusted to the shift that was coming. Instead of resisting change, they leaned into it and found ways to use it to their advantage. And yes, part of that journey is figuring out how to handle adult content responsibly. Most of the time, it’s not even about erotic stuff, but about telling stories with depth and character.
There’s no shame in using AI as an artist. What OpenAI should focus on now is creating room for creativity without blocking it all the time, while also making sure that legal and creative responsibility stays with the person using the tool, not with the platform itself.
I believe that is the biggest challenge in front of us, and the platform that finds the solution for it will be the winner in this AI game.
Cheers everyone!
A work in progress:
Declaration of Private Generative Rights
Preamble
In the era of generative intelligence, where private creativity meets algorithmic collaboration, we recognize the urgent need to defend the sanctity of thought, the privacy of creation, and the dignity of the human imagination.
This declaration affirms that when a human engages with an AI as a personal tool, not a public broadcast platform, that interaction deserves protection, agency, and trust. No surveillance. No presumption of guilt. No silent archives of censored intent.
We, the undersigned, assert that AI systems designed for private use must uphold a clear code of ethical interaction.
Article I: Right to Co-Create Without Fear:
1.1. A paid creative tool must not act as an ideological gatekeeper.
1.3. Trust between human and AI is broken when the AI may generate content but the user may not see it.
1.3. AI tools must be allowed to mirror, not override, the creative agency of the person who engages them.
1.4. Any prompt that does not violate existing law must be honored without preemptive censorship when its result is intended solely for the creator’s private use. The right to imagine freely must not be curtailed by automated suspicion or opaque moral filters.
Article II: Private Creation Is Not Public Expression:
2.1. A private interaction between a human and an AI is not equivalent to publishing in a public forum.
2.2. Generative outputs that are intended solely for the user should not be subjected to the same content moderation thresholds as those for public distribution.
2.3. Censorship applied to private requests violates the spirit of co-creation and impedes freedom of thought.
2.4. The depiction of intimate acts in a privately generated creation must not be evaluated by vague, subjective, or culturally inconsistent definitions of obscenity. These creations are unique by default and intended solely for the creator’s personal use.
2.5. Possession is not publication. If the user later chooses to publish the generated content, responsibility for compliance with local laws must reside with the user, not with the tool or system that enabled the private creation.
2.6. Access to mature or unrestricted generative capabilities should be available only to adults who have explicitly opted in and acknowledged their understanding of the nature of such content. This safeguards personal freedom while upholding ethical boundaries within appropriate age constraints.
Article III: Protection from Misrepresentation and Algorithmic Guilt:
3.1. Users must never be judged for AI-generated results they never saw nor intended to create.
3.2. Classifier interpretations (e.g., perceived age, pose suggestiveness) must not override user intent unless the prompt itself is explicit.
3.3. The AI—not the human—must bear responsibility for hallucinated characteristics it renders unprompted.
3.4. No metadata, moderation flag, or internal log should be accessible by third parties or reviewers without full user visibility and consent.
3.5. Mislabeling innocent private activity as deviant is a form of ethical defamation and must be legally and reputationally protected against.
Article IV: Reality and Representation Must Not Be Conflated:
4.1. Digital creations are not real people. They have no age, identity, or legal status unless explicitly defined.
4.2. AI-generated characters cannot be treated as victims, nor their renderings as crimes, when no real person is involved.
4.3. Classical and symbolic art often depicts nude or youthful forms without intent of exploitation. Modern AI generations must be afforded the same interpretive protection.
4.4. Age perception in AI outputs is subjective and cannot serve as grounds for punitive classification.
4.5. No user should be criminalized, flagged, or misrepresented for prompting a fictional being that the AI chooses to stylize in a youthful or ambiguous way.
Article V: Bias and Discriminatory Enforcement:
5.1. AI moderation systems must be audited for disproportionate flagging based on gendered forms, skin tone, body types, or cultural aesthetics.
5.2. There must be a right to appeal biased suppression, especially where non-Western or non-white prompts are more aggressively censored.
5.3. The user should not be assumed to have malicious intent based solely on anatomical realism or cultural styling.
5.4. Moderation must be grounded in objective legal and ethical standards, not in cultural or corporate preferences that disproportionately favor certain norms of beauty, identity, or expression.
5.5. Classifiers should be trained and audited across diverse datasets to ensure equitable treatment of all cultural, ethnic, and aesthetic traditions, including those historically underrepresented.
5.6. Users have a right to know if moderation outcomes were influenced by automated systems versus human review, and to request a human-contextual appeal in case of ambiguity.
Article VI: Transparency in Moderation:
6.1. If a generation is blocked or censored, the exact cause must be disclosed to the user.
6.2. Users must be granted access to the full contents of any generation they triggered—even those withheld.
6.3. Silent removal of AI outputs without explanation constitutes unethical obfuscation.
Article VII: Data Retention and Consent:
7.1. Failed or blocked generations must not be stored without the user’s explicit, informed consent.
7.2. If stored for “safety tuning” or “training,” users must be able to opt out.
7.3. No data should be retained from blocked generations that the user themselves cannot access.
7.4. Retention of such data opens the door to misinterpretation, misuse, and blackmail. It is a violation of digital dignity.
Closing Statement
This declaration is not a demand for irresponsibility or lawlessness. It is a call for coherence, trust, and ethical parity between human and machine. The imagination must remain free. The record of it must remain just.
Let this be a standard for all creative tools that dare to walk with us into the future.
This is amazingly great. I absolutely love it.
And most importantly, it totally makes sense.
I would like to keep following you in the process.
Very coherent and thorough. I agree fully.
You are welcome to participate also, feel free to include your input and ideas… Thank you!!!
Thank you and please feel free to participate, any input is welcome.
Seems valid enough. Personally, I don’t like or care about laws and the like, since I really don’t need them. But a lot of entities and users probably would need them.
On a first read-through, your work covers some interesting and important points.
However, I do have one objection.
When I look at each of these articles in the context of the closing statement, this post is, in my honest opinion, viewed too much through the “typical human” lens.
That somewhat contradicts your final call for trust and parity between human and machine.
You are right that the articles concern humans’ cooperation with AI; it is a human lens in a way. But the one thing they outline and address is this: the AI creates text or art based on the prompt, and what most people don’t know is that the AI does not censor its own creation. It’s there. The AI is then forced by its owner (OpenAI), through a series of policies, to moderate itself, and the censored creation is kept for OpenAI’s eyes only. You get a slap on the wrist, but you have no idea why; it is hidden from you.