Opt-Out-OpenAI and World-Shaking Impacts

These articles are based on observations of social changes where AI is an important factor.

Opt-Out-OpenAI

Understanding Copyright and What AI Is Actually Capable Of

From the phrase “how the model is trained” to the real question: “what can it do?”
I’ll explain—no need for deep technical details, just open your eyes and act.

The law can’t catch up with AI, and don’t expect AI to wait.
When it comes to copyright violations involving AI, most people still think of suing developers, without understanding the real factors behind problematic outputs.

It’s the old mindset: “they trained on original works,” “they stole images from the internet.”
But in reality, some AI models can generate imitation images without ever using the original ones.

Many of you probably already know DALL·E 3 as difficult to use, weak, and a side feature bundled with ChatGPT.
Forget infringing on copyrights or creating likenesses—most people can’t even get it to recreate the same image based on a prompt.
So it’s been mostly abandoned and ignored.
But did you know? Many image-generation models integrate parts of DALL·E 2 and 3.
Some companies mix everything up, combining features from multiple models.
Even ChatGPT lets you plug in other image-generation models into its system.

My main use of DALL·E isn’t for generating images. I observed behaviors with a tendency to create outputs that might violate laws in other ways.

I use the word “tendency” because these issues can arise unintentionally but escalate when put into use. (This problem expands into the distortion of public information in more visible ways.)

In the end, the best prevention method is either training the model not to generate those outputs—or opting out.

In the past, opting out couldn’t protect data that had already been learned, nor could it stop users with skillful prompt-writing from mimicking it.

But now, ChatGPT’s image generation can cut off the output mid-process, even right before it’s finished. It happened to me personally.

This is better than the older DALL·E 3 system, which would let the image be generated and then decide afterward. Once generated, you couldn’t stop it.

I once suggested that the law needs to change and understanding needs to be rebuilt, fast. AI learns too quickly for us to wait.

I used to talk about the hypothesis that training on original images wouldn’t be necessary. That’s now true.

You can take an original, unpublished work and ask GPT-4o to generate images of the character from 4 directions or 10 angles around it, and it can do it. This is where the question arises: when was that work ever used in training?

We’ve long been stuck in frameworks that contradict the capabilities of AIs like ChatGPT.
People try to contain LLMs, impose output filters, and blame developers for using other people’s data in training. Or they claim the model just guesses words, that the output isn’t trustworthy, etc.

But look at this example:
• “10 + 10 = ?”
• “Ten and ten become?”
• “Ten plus ten, what is the result?”
Aside from the other supporting text, is the main meaning of those answers unclear? Are they unreliable? Are they random?

The conclusion: when the input is clear and direct, the output can be controlled reliably.
And similar outputs can come from different inputs.
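
To make that concrete, here is a minimal sketch, assuming the official openai Python SDK and a GPT-4o-class model, that sends those three wordings and prints the answers. The model name and the comparison are illustrative, not a claim about any specific system:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompts = [
        "10 + 10 = ?",
        "Ten and ten become?",
        "Ten plus ten, what is the result?",
    ]

    for p in prompts:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": p}],
        )
        # Each reply should carry the same core answer: 20.
        print(p, "->", response.choices[0].message.content)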

OpenAI may have blocked certain terms in the past to prevent direct generation,
but today, keyword blocking alone isn’t enough. Generation from purely descriptive prompts has been possible for a long time.
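
To see why, here is a minimal sketch of a plain keyword blocklist. The blocked terms and prompts are hypothetical examples, not OpenAI’s actual filter; the point is that a descriptive prompt slips past it while asking for the same image:

    # Hypothetical blocklist; real systems are more elaborate, but the weakness is the same.
    BLOCKED_TERMS = {"mona lisa", "leonardo da vinci"}

    def keyword_filter(prompt: str) -> bool:
        """Return True if simple keyword matching would block this prompt."""
        lowered = prompt.lower()
        return any(term in lowered for term in BLOCKED_TERMS)

    direct = "Paint the Mona Lisa"
    descriptive = ("A Renaissance oil portrait of a woman with a faint smile, "
                   "hands folded, a hazy landscape behind her, soft sfumato shading")

    print(keyword_filter(direct))       # True  -> blocked
    print(keyword_filter(descriptive))  # False -> slips through, yet describes the same image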

Below is the continuation, based on a prompt I gave ChatGPT. I asked it to explain “opt-out,” “generation without training,” and “prevention systems,” then had it summarize the content.

Some terms might not be exact, but the core principles remain within the scope of Opt-Out:
“Letting the model recognize you—in order to prevent it from copying you.”

“Not Consenting” to AI Use: A Request That Doesn’t Equal Protection

  1. The Structural Reality Behind Opt-Out Systems
    Many people misunderstand “opting out” of AI training as meaning their work will never be touched again.
    But that’s not how these systems are actually structured.
    “Opt-out means allowing the system to know your work so it learns not to generate it — not to avoid it entirely.”
    The model will no longer use your work to memorize or replicate it.
    Instead, it uses it for checking, comparison, and to build filters that detect:
    “Which images should be avoided in generation?” (a minimal sketch of this filtering idea appears after this list)
    By design, the system must still process your data
    to understand what shouldn’t appear in results.
    Simply put: access to the data isn’t terminated — only its purpose is changed.
    (Reference: OpenAI opt-out form. If the link is broken, contact the Help Center.)
  2. How Can Models Imitate New Work Without Training On It?
    This is what confuses many, and is sometimes used to dodge technical accountability:
    • AI doesn’t learn one image at a time. It learns trends, structure, and rules behind image types.
    • If your work uses structures or styles already within the model’s prior knowledge,
    it can generate something similar—without having seen your work.
    The model doesn’t “know” you—it knows forms you happen to use in common with many others.
    Another angle is highly detailed prompting (sometimes loosely called “prompt injection”), where the user provides deep information:
    • Facial structures
    • Lighting, color tone, camera angle
    • Image composition order
    Combined with what the model already knows,
    that’s often enough for it to generate something that “seems like it knows you”—
    even if it’s never trained on your work.
  3. Midway Image Suppression: The Image Disappears Before Completion
    This isn’t widely discussed, but it shows a system more sophisticated than just “censorship.”
    • Image generation isn’t a single process.
    It’s step-wise diffusion:
    Starts from noise → gradually becomes clearer → becomes a final image.
    Along the way, intermediate images are sent for safety review.
    If the system detects:
    • Facial structures marked as restricted
    • Linework, logos, or shapes from blocked lists
    • Composition similar to opt-out references
    Even if the image isn’t finished, the system will:
    • Immediately halt generation
    • Discard the image
    • In some cases, block the user’s prompt in advance
    This may involve low-res analysis or converting the image into embeddings for database comparison (see the sketches after this list).
  4. Overlooked Fundamentals
    Things these systems rarely say out loud:
    • Opt-out ≠ “right to be forgotten” — it’s being remembered so you won’t be copied
    • Models can produce results similar to yours without knowing who you are
    • Safety filters operate silently, affecting the system directly with no visible result
    • No way exists to verify if your work was ever used in training
    Conclusion Without Sugarcoating
    Preventing AI from using your work in image generation isn’t about controlling the data—
    It’s about controlling the system’s behavior.
    And to control that behavior—
    You have to submit the very data you don’t want the model to learn.
    That’s the paradox no one ever clearly explained to artists.
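
To make item 1 of the summary above concrete, here is a minimal sketch of the “checking and comparison” idea: opted-out works are stored as reference embeddings, and candidate outputs are compared against them. The embedding function, the placeholder images, and the threshold are all hypothetical stand-ins, not OpenAI’s actual implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    # Placeholder images standing in for opted-out reference works.
    reference_images = [rng.integers(0, 256, size=(64, 64, 3)) for _ in range(3)]

    def embed_image(image: np.ndarray) -> np.ndarray:
        """Stand-in for a real image-embedding model (e.g. a CLIP-style encoder)."""
        vec = image.astype(np.float32).ravel()[:512]
        return vec / (np.linalg.norm(vec) + 1e-8)

    # Built once from the submitted opt-out data: the system must see your work to build this.
    opt_out_db = [embed_image(ref) for ref in reference_images]

    def violates_opt_out(candidate: np.ndarray, threshold: float = 0.92) -> bool:
        """Block a candidate output if it is too similar to any opted-out reference."""
        cand = embed_image(candidate)
        return any(float(cand @ ref) >= threshold for ref in opt_out_db)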
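
For item 3 of the summary, here is a minimal sketch of step-wise generation with a mid-process safety check. The denoising step and the review function are stand-ins, not a real diffusion model or a real safety system; the point is only the structure: generate a little, review the intermediate image, and halt before it is finished if something is flagged:

    import numpy as np

    rng = np.random.default_rng(1)

    # One hypothetical opted-out reference embedding (similar in spirit to the opt-out filter sketch nearby).
    blocked_reference = rng.normal(size=512)
    blocked_reference /= np.linalg.norm(blocked_reference)

    def embed(image: np.ndarray) -> np.ndarray:
        vec = image.astype(np.float32).ravel()[:512]
        return vec / (np.linalg.norm(vec) + 1e-8)

    def passes_safety_review(intermediate: np.ndarray) -> bool:
        # Stand-in for the intermediate review: restricted faces, blocked logos,
        # and similarity to opt-out references would all be checked here.
        return float(embed(intermediate) @ blocked_reference) < 0.92

    def denoise_step(image: np.ndarray, step: int) -> np.ndarray:
        # Stand-in for one diffusion step: noise gradually becomes an image.
        return image * 0.95 + rng.normal(scale=0.01, size=image.shape)

    def generate(steps: int = 30):
        image = rng.normal(size=(64, 64, 3))  # start from pure noise
        for step in range(steps):
            image = denoise_step(image, step)
            if step % 5 == 0 and not passes_safety_review(image):
                return None  # halt generation mid-process and discard the image
        return image  # only an image that passed every intermediate check is returned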

I apologize that these messages may not be very well written.

A System Without Responsibility: Opt-Out and a Framework That Leaves People Behind

Let’s begin with the question:
“Is there a central agency managing opt-out?”
The answer is simple: No.
There’s no place for centralized protection requests.
There’s no authority every AI model respects.
If you want to exclude your work from training,
you have to fill out forms for each company, one by one.
And no one tells you whether your work has already been used.
China isn’t a hero—but it actually got things done.
I’m not glorifying China, but you have to admit it.
China made the concept of “permission before use” a reality.
AI-generated images must be labeled as fake.
The generation system gets reviewed as the image is being made.
If it fails, the image is deleted before completion.
This is control over results, not just over training.
In contrast, the US and Europe have been holding meetings for years
without delivering a single working system.
Only empty talk about “transparency,” “accountability,” and “ethics.”
But the public sees nothing usable.
Other governments? Don’t even start.
Plenty of laws that look relevant—
but none address real AI problems.
No opt-out that applies to image models.
No alert system saying, “Your work has been used.”
Not even a simple explanation to help artists understand where to begin.
So who should be held responsible? No one.
• Model developers:
Pressured to “prevent future problems,”
even though new work appears every day, new people every second.
If a system accidentally generates something similar to yesterday’s release,
people lash out: “Why didn’t you block that in advance?”
• Creators:
No knowledge, no tools, no user-friendly path.
They must send requests manually,
must navigate systems not designed for ordinary people,
must let the system ‘know’ their work before it can say,
“Don’t generate this again.”
• AI organizations:
Build protection systems,
but never admit they must first use your data to make those systems work.
Every protection feature = must know you first.
Otherwise, how can the model avoid you?
So how can you call that fair,
if you must expose your work in order not to be copied?
• Governments:
Flashy keywords at every event—
“AI Governance,” “Digital Security,” “Ethics Curriculum.”
But those people? Never used the systems themselves,
never tried generating images,
don’t even know where the problems occur.
They create courses, titles, and visuals—
while still not understanding what needs to be blocked or how.

Why do I have to say it?
• I am not the one affected
• I am not a victim
• I simply saw that the system leaves the uninformed exposed first
• I spotted the loophole first
• I warned others
• I looked for solutions
But I can’t do it alone. I just hope that someone else will see it and help solve it.

Last Message.
The problem I found with the new image generation system is a lack of transparency about how prompts are actually used. Misunderstandings will lead to misuse, and it becomes a loop of AI misbehavior.
• For example, checking for changes to the prompt: besides there being no normal way to check, I found that the message sent to the image generation model had been changed again (one way to check this through the API is sketched after these bullets).
• This is similar to using a Custom GPT to generate DALL·E 3 images through an Action, which causes the prompt to be changed. For example, one model is given the job of composing the prompt and passing it on to generate the image; in most cases, the prompt that is sent should not be modified again.
• If you send a clock, you should get a clock back. But what came back was an owl, which deviated from the overall meaning of the prompt.
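
One way to check this at the API level is sketched below, assuming the official openai Python SDK and the DALL·E 3 endpoint, which returns a revised_prompt field. Whether the newer ChatGPT image system exposes the same information is exactly the transparency gap described above:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    original_prompt = "A simple wall clock on a plain white background"

    response = client.images.generate(
        model="dall-e-3",
        prompt=original_prompt,
        n=1,
        size="1024x1024",
    )

    revised = response.data[0].revised_prompt
    if revised and revised != original_prompt:
        # The system rewrote the prompt before generating; compare the two to see
        # whether the overall meaning drifted (a clock should stay a clock).
        print("Prompt was rewritten:")
        print("  sent:    ", original_prompt)
        print("  revised: ", revised)
    else:
        print("Prompt was used as sent.")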

I don’t have time to investigate in depth at this point, but this ambiguity will become a problem, like the one in the past that hurt DALL·E 3 without anyone knowing.

Thank you for reading.