These articles are based on observations of social change in which AI is an important factor.
Opt-Out-OpenAI
Understanding Copyright and What AI Is Actually Capable Of
From the phrase “how the model is trained” to the real question: “what can it do?”
I’ll explain—no need for deep technical details, just open your eyes and act.
The law can’t catch up with AI, and don’t expect AI to wait.
When it comes to copyright violations involving AI, most people still think of suing developers, without understanding the real factors behind problematic outputs.
It’s the old mindset: “they trained on original works,” “they stole images from the internet.”
But in reality, some AI models can generate imitation images without ever using the original ones.
Many of you probably already know DALL·E 3 as difficult to use, weak, a side feature bundled with ChatGPT.
Forget infringing on copyrights or creating likenesses—most people can’t even get it to recreate the same image based on a prompt.
So it’s been mostly abandoned and ignored.
But did you know? Many image-generation models integrate parts of DALL·E 2 and 3.
Some companies mix everything up, combining features from multiple models.
Even ChatGPT lets you plug other image-generation models into its system.
My main use of DALL·E isn’t generating images. I’ve been observing behaviors with a tendency to produce outputs that might violate the law in other ways.
I use the word “tendency” because these issues can arise unintentionally but escalate when put into use. (This problem expands into the distortion of public information in more visible ways.)
In the end, the best prevention method is either training the model not to generate those outputs—or opting out.
In the past, opting out couldn’t protect data that had already been learned, nor could it stop users with skillful prompt-writing from mimicking it.
But now, ChatGPT’s image generation can cut off the output mid-process, even right before it’s finished. It happened to me personally.
This is better than the older DALL·E 3 system, which would let the image be generated and then decide afterward. Once generated, you couldn’t stop it.
I once suggested the law needs to change and understanding needs to be rebuilt—fast. The learning speed is too fast to wait.
I used to talk about the hypothesis that training on the original images wouldn’t even be necessary. That’s now true.
You can take an original, unpublished work and ask GPT-4o to generate images from 4 directions, 10 angles around the character, and it can do it. And this is where the question arises: when was that work ever used in training?
We’ve long been stuck in frameworks that contradict the capabilities of AIs like ChatGPT.
People try to contain LLMs, impose output filters, and blame developers for using other people’s data in training. Or they claim the model just guesses words, that the output isn’t trustworthy, etc.
But look at this example:
• “10 + 10 = ?”
• “Ten and ten become?”
• “Ten plus ten, what is the result?”
Aside from the other supporting text, is the main meaning of those answers unclear? Are they unreliable? Are they random?
The conclusion: when the input is clear and direct, the output can be controlled reliably.
And similar outputs can come from different inputs, as the sketch below illustrates.
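To put that point in code, here is a toy sketch. The `ask_model` function below is not a real API; it only simulates the observation that prompts phrased in very different ways can still pin the model down to the same core answer.

```python
import re

WORDS = {"ten": 10, "twenty": 20}

def ask_model(prompt: str) -> int:
    # Toy stand-in for a language model: extract the quantities the
    # prompt clearly asks about and add them. The phrasing varies,
    # but the core meaning of the request does not.
    tokens = re.findall(r"[a-z]+|\d+", prompt.lower())
    numbers = [int(t) if t.isdigit() else WORDS[t]
               for t in tokens if t.isdigit() or t in WORDS]
    return sum(numbers)

prompts = [
    "10 + 10 = ?",
    "Ten and ten become?",
    "Ten plus ten, what is the result?",
]
print({p: ask_model(p) for p in prompts})  # every phrasing yields 20
```

The wrapping text differs, but the controllable core of the answer is the same in every case.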
OpenAI may have blocked certain terms in the past to prevent direct generation,
but today, keyword-blocking alone isn’t enough. Generation from descriptive prompts already existed before.
Below is the continuation, based on a prompt I gave ChatGPT. I asked it to explain “opt-out,” “generation without training,” and “prevention systems,” then had it summarize the content.
Some terms might not be exact, but the core principles remain within the scope of Opt-Out:
“Letting the model recognize you—in order to prevent it from copying you.”
“Not Consenting” to AI Use: A Request That Doesn’t Equal Protection
- The Structural Reality Behind Opt-Out Systems
Many people misunderstand “opting out” of AI training as meaning their work will never be touched again.
But that’s not how these systems are actually structured.
“Opt-out means allowing the system to know your work so it learns not to generate it — not to avoid it entirely.”
The model will no longer use your work to memorize or replicate it.
Instead, it uses it for checking and comparison, and to build filters that detect:
“Which images should be avoided in generation?”
By design, the system must still process your data
to understand what shouldn’t appear in results.
Simply put: access to the data isn’t terminated — only its purpose is changed.
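To make that structure concrete, here is a minimal sketch of what such a filter could look like. Everything in it is an assumption for illustration: the `embed_image` stub, the similarity threshold, and the choice to store opted-out works as embeddings are not confirmed details of OpenAI’s system.

```python
import numpy as np

def embed_image(image_bytes: bytes, dim: int = 64) -> np.ndarray:
    # Hypothetical stand-in for a real image-embedding model. A production
    # system would use a learned encoder; here we derive a deterministic
    # unit vector from the image bytes so the sketch runs on its own.
    rng = np.random.default_rng(abs(hash(image_bytes)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

class OptOutFilter:
    """Stores embeddings of opted-out works and flags look-alike outputs."""

    def __init__(self, threshold: float = 0.9):
        self.references: list[np.ndarray] = []
        self.threshold = threshold

    def register(self, image_bytes: bytes) -> None:
        # Opting out means the system keeps a representation of the work,
        # precisely so it can recognize and block similar generations later.
        self.references.append(embed_image(image_bytes))

    def is_blocked(self, candidate_bytes: bytes) -> bool:
        # Compare a candidate output against every opt-out reference.
        candidate = embed_image(candidate_bytes)
        return any(float(candidate @ ref) >= self.threshold
                   for ref in self.references)

# Usage: the artist's own work has to be submitted to build the filter.
f = OptOutFilter()
f.register(b"my-unpublished-character-sheet")
print(f.is_blocked(b"some-generated-image"))
```

The detail worth noticing is that `register()` has to be called with the very work the artist wants protected, which is exactly the trade-off the rest of this piece is about.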
(Reference: OpenAI opt-out form. If the link is broken, contact the Help Center.)

- How Can Models Imitate New Work Without Training On It?
This is what confuses many, and is sometimes used to dodge technical accountability:
• AI doesn’t learn one image at a time. It learns trends, structure, and rules behind image types.
• If your work uses structures or styles already within the model’s prior knowledge,
it can generate something similar—without having seen your work.
The model doesn’t “know” you—it knows forms you happen to use in common with many others.
Another angle is prompt injection, where the user provides deep information:
• Facial structures
• Lighting, color tone, camera angle
• Image composition order
Combined with what the model already knows,
that’s often enough for it to generate something that “seems like it knows you”—
even if it’s never trained on your work.

- Midway Image Suppression: The Image Disappears Before Completion
This isn’t widely discussed, but it shows a system more sophisticated than just “censorship.”
• Image generation isn’t a single process.
It’s step-wise diffusion:
Starts from noise → gradually becomes clearer → becomes a final image.
Along the way, intermediate images are sent for safety review.
If the system detects:
• Facial structures marked as restricted
• Linework, logos, or shapes from blocked lists
• Composition similar to opt-out references
Even if the image isn’t finished, the system will:
• Immediately halt generation
• Discard the image
• In some cases, block the user’s prompt in advance
This may involve low-res analysis or converting the image into embeddings for database comparison.
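As a rough illustration of how such a step-wise check could be wired up, here is a toy loop. It is a sketch under stated assumptions: `denoise_step` and `looks_restricted` are placeholder functions, and the step count and review cadence are guesses, not documented behavior of any real system.

```python
from typing import Optional
import numpy as np

STEPS = 30          # assumed number of diffusion steps
REVIEW_EVERY = 5    # assumed cadence of intermediate safety reviews

def denoise_step(image: np.ndarray, step: int) -> np.ndarray:
    # Placeholder for one diffusion denoising step; a real model would
    # predict and remove noise here. We just damp the values slightly.
    return image * 0.9

def looks_restricted(image: np.ndarray) -> bool:
    # Placeholder safety check: a real system might downscale the
    # intermediate image or embed it and compare it against opt-out
    # references, blocked logos, or restricted faces.
    return False

def generate(prompt: str) -> Optional[np.ndarray]:
    image = np.random.default_rng(0).normal(size=(64, 64))  # start from noise
    for step in range(STEPS):
        image = denoise_step(image, step)
        if step % REVIEW_EVERY == 0 and looks_restricted(image):
            # Halt mid-generation and discard the partial image,
            # matching the behavior described above: the image can
            # disappear right before it is finished.
            return None
    return image

result = generate("a character turnaround, 10 angles")
print("blocked" if result is None else "delivered")
```

The design point is simply that the safety check sits inside the generation loop, so a partial image can be discarded before it ever reaches you.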
- Overlooked Fundamentals

Things these systems rarely say out loud:
• Opt-out ≠ “right to be forgotten” — it’s being remembered so you won’t be copied
• Models can produce results similar to yours without knowing who you are
• Safety filters operate silently, affecting the system directly without showing you any visible result
• No way exists to verify if your work was ever used in training
Conclusion Without Sugarcoating
Preventing AI from using your work in image generation isn’t about controlling the data—
It’s about controlling the system’s behavior.
And to control that behavior—
You have to submit the very data you don’t want the model to learn.
That’s the paradox no one ever clearly explained to artists.
I apologize that these messages may not be very well written.