A work in progress:
Declaration of Private Generative Rights
Preamble
In the era of generative intelligence, where private creativity meets algorithmic collaboration, we recognize the urgent need to defend the sanctity of thought, the privacy of creation, and the dignity of the human imagination.
This declaration affirms that when a human engages with an AI as a personal tool, not a public broadcast platform, that interaction deserves protection, agency, and trust. No surveillance. No presumption of guilt. No silent archives of censored intent.
We, the undersigned, assert that AI systems designed for private use must uphold a clear code of ethical interaction.
Article I: Right to Co-Create Without Fear
1.1. A paid creative tool must not act as an ideological gatekeeper.
1.2. Trust between human and AI is broken when the AI is permitted to generate content that the user is not permitted to receive.
1.3. AI tools must be allowed to mirror, not override, the creative agency of the person who engages them.
1.4. Any prompt that does not violate existing law must be honored without preemptive censorship when its result is intended solely for the creator’s private use. The right to imagine freely must not be curtailed by automated suspicion or opaque moral filters.
Article II: Private Creation Is Not Public Expression
2.1. A private interaction between a human and an AI is not equivalent to publishing in a public forum.
2.2. Generative outputs that are intended solely for the user should not be subjected to the same content moderation thresholds as those for public distribution.
2.3. Censorship applied to private requests violates the spirit of co-creation and impedes freedom of thought.
2.4. The depiction of intimate acts in a privately generated creation must not be evaluated by vague, subjective, or culturally inconsistent definitions of obscenity. These creations are unique by default and intended solely for the creator’s personal use.
2.5. Possession is not publication. If the user later chooses to publish the generated content, responsibility for compliance with local laws must reside with the user, not with the tool or system that enabled the private creation.
2.6. Access to mature or unrestricted generative capabilities should be available only to adults who have explicitly opted in and acknowledged their understanding of the nature of such content. This safeguards personal freedom while upholding ethical boundaries within appropriate age constraints.
Article III: Protection from Misrepresentation and Algorithmic Guilt
3.1. Users must never be judged for AI-generated results they neither saw nor intended to create.
3.2. Classifier interpretations (e.g., perceived age, pose suggestiveness) must not override user intent unless the prompt itself is explicit.
3.3. The AI—not the human—must bear responsibility for hallucinated characteristics it renders unprompted.
3.4. No metadata, moderation flag, or internal log should be accessible by third parties or reviewers without full user visibility and consent.
3.5. Mislabeling innocent private activity as deviant is a form of ethical defamation, and users must be protected against it, both legally and reputationally.
Article IV: Reality and Representation Must Not Be Conflated
4.1. Digital creations are not real people. They have no age, identity, or legal status unless explicitly defined.
4.2. AI-generated characters cannot be treated as victims, nor their renderings as crimes, when no real person is involved.
4.3. Classical and symbolic art often depicts nude or youthful forms without intent of exploitation. Modern AI generations must be afforded the same interpretive protection.
4.4. Age perception in AI outputs is subjective and cannot serve as grounds for punitive classification.
4.5. No user should be criminalized, flagged, or misrepresented for prompting a fictional being that the AI chooses to stylize in a youthful or ambiguous way.
Article V: Bias and Discriminatory Enforcement
5.1. AI moderation systems must be audited for disproportionate flagging based on gendered forms, skin tone, body types, or cultural aesthetics.
5.2. There must be a right to appeal biased suppression, especially where non-Western or non-white prompts are more aggressively censored.
5.3. The user should not be assumed to have malicious intent based solely on anatomical realism or cultural styling.
5.4. Moderation must be grounded in objective legal and ethical standards, not in cultural or corporate preferences that disproportionately favor certain norms of beauty, identity, or expression.
5.5. Classifiers should be trained and audited across diverse datasets to ensure equitable treatment of all cultural, ethnic, and aesthetic traditions, including those historically underrepresented.
5.6. Users have a right to know if moderation outcomes were influenced by automated systems versus human review, and to request a human-contextual appeal in case of ambiguity.
Article VI: Transparency in Moderation
6.1. If a generation is blocked or censored, the exact cause must be disclosed to the user.
6.2. Users must be granted access to the full contents of any generation they triggered—even those withheld.
6.3. Silent removal of AI outputs without explanation constitutes unethical obfuscation.
Article VII: Data Retention and Consent
7.1. Failed or blocked generations must not be stored without the user’s explicit, informed consent.
7.2. If stored for “safety tuning” or “training,” users must be able to opt out.
7.3. No data should be retained from blocked generations that the user cannot themselves access.
7.4. Retention of such data opens the door to misinterpretation, misuse, and blackmail. It is a violation of digital dignity.
Closing Statement
This declaration is not a demand for irresponsibility or lawlessness. It is a call for coherence, trust, and ethical parity between human and machine. The imagination must remain free. The record of it must remain just.
Let this be a standard for all creative tools that dare to walk with us into the future.