I’ve been reading more reports about deepfake images being used to target children and teenagers, cases where synthetic explicit photos of minors are generated without their knowledge or involvement.
Schools are already dealing with situations where students create deepfake images of classmates, and predators can now produce sexualized imagery of children without any physical access to them.
Here is one recent example (AP/PBS):
This issue sits at the intersection of technology, online safety, child protection and education.
Here are two questions I’m trying to understand better, but please feel free to add your own angles or follow-up questions:
What kinds of technical safeguards or detection methods are being developed to identify deepfake images involving minors?
Could stronger image provenance/watermarking systems realistically help reduce the spread or credibility of harmful synthetic content involving minors?
Any perspectives are welcome.
I’m trying to understand which approaches might actually help protect children and teenagers from this kind of misuse.
We have actually built a script that can identify such images. If there is a real need for it (beyond the insurance market, where we are rolling it out in Q1), send me a private message.
I am also building an education management system that will have this functionality built in.
More than 50% of social media content (I would say this directly to any company) is AI-generated or fake.
I should look up the research studies - I made…
But the mental health of the kids, or of the "influencers" without any real background in the "life shows" → yes, I can do that too, I can make that as well: superhumans and the brothers… or women with AI perfection - naturally a business, when someone is that qualified.
Security will become more important.
With agents, too.
The other company has a system - the customers are not sufficiently informed - with a "personal life card"… for all personal security data…
This is an important and genuinely difficult issue, and I appreciate the seriousness of the concern.
That said, I think it’s also crucial to be careful about where we focus the discussion, especially in open technical forums.
From my perspective, harm involving minors and synthetic imagery is not primarily a detection problem, even though detection often becomes the most discussed technical angle.
On your first question:
While there is ongoing work around classifiers, forensic signals, and content analysis, history suggests that single-layer detection approaches inevitably enter an escalation loop. As detection methods become more public and more precise, they also become easier to probe, benchmark against, and ultimately bypass. For that reason, many practitioners increasingly view detection as one component in a broader system, rather than a standalone safeguard.
What tends to matter more in practice are multi-layered measures that sit around the model and the content lifecycle: platform-level risk signals, abnormal generation or sharing patterns, fast takedown and reporting pipelines, human review for high-risk cases, and clear legal escalation paths when minors are involved. These measures are less about “catching perfect fakes” and more about reducing real-world harm and amplification.
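To make the "one component in a broader system" point concrete, here is a deliberately simplified sketch, not any platform's real pipeline: every signal name, weight, and threshold below is hypothetical. The idea is only that an escalation decision can combine several weak signals (a detector score, amplification rate, reports, a minor-involvement flag) rather than trusting one classifier threshold.

```python
from dataclasses import dataclass

# Hypothetical, simplified signals a platform might combine.
# All names and thresholds are illustrative only.
@dataclass
class ContentSignals:
    detector_score: float    # 0..1 output of a synthetic-image classifier
    account_age_days: int    # age of the uploading account
    shares_last_hour: int    # how fast the content is being amplified
    user_reports: int        # number of abuse reports received
    minor_flag: bool         # e.g. school context or age-estimation signal

def triage(s: ContentSignals) -> str:
    """Map combined signals to an action tier, rather than trusting
    any single detector threshold on its own."""
    risk = 0
    risk += 2 if s.detector_score > 0.8 else (1 if s.detector_score > 0.5 else 0)
    risk += 1 if s.account_age_days < 7 else 0
    risk += 1 if s.shares_last_hour > 100 else 0
    risk += 1 if s.user_reports >= 3 else 0
    risk += 3 if s.minor_flag else 0

    if risk >= 5:
        return "remove_and_escalate"   # human review + legal/reporting pipeline
    if risk >= 3:
        return "limit_and_review"      # stop amplification, queue for review
    return "monitor"

if __name__ == "__main__":
    example = ContentSignals(0.65, 2, 250, 4, True)
    print(triage(example))  # -> remove_and_escalate
```

Note that in this sketch a mediocre detector score still triggers escalation when the surrounding signals point to harm involving a minor; that is the practical difference between a standalone classifier and a layered system.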
On your second question regarding provenance and watermarking:
Source attribution and watermarking can help with post-hoc accountability and credibility assessment, particularly for journalists, platforms, and courts. However, they are unlikely to function as a primary protective barrier for minors. Even strong provenance systems do not prevent generation or initial misuse; at best, they help establish context and responsibility after the fact. Treating them as a silver bullet risks overestimating what technical markers alone can achieve.
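To illustrate (not prescribe) what provenance actually buys you, here is a deliberately simplified sketch. Real standards such as C2PA use public-key signatures over embedded manifests; the HMAC below is only a stand-in, and the key name, generator id, and functions are all hypothetical. The point is that verification establishes origin and integrity after the image already exists, which is exactly why it cannot serve as a primary barrier.

```python
import hashlib
import hmac
import json

# Hypothetical key held by a cooperating generator; real systems would
# use public-key signatures rather than a shared secret.
SIGNING_KEY = b"generator-held-secret"

def sign_manifest(image_bytes: bytes, generator_id: str) -> dict:
    """What a cooperating generator could attach at creation time."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    payload = json.dumps({"sha256": digest, "generator": generator_id}, sort_keys=True)
    tag = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """What a platform, journalist, or court could check after the fact.
    This establishes origin and integrity; it does nothing to stop the
    image from being generated or shared in the first place."""
    payload = json.loads(manifest["payload"])
    if payload["sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False  # image altered, or manifest belongs to a different image
    expected = hmac.new(SIGNING_KEY, manifest["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["tag"])

if __name__ == "__main__":
    img = b"...image bytes..."
    m = sign_manifest(img, "example-generator")
    print(verify_manifest(img, m))        # True: provenance intact
    print(verify_manifest(b"edited", m))  # False: no attribution possible
```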
More broadly, I worry that discussions framed too narrowly around how to detect or how to verify synthetic images can unintentionally miss the larger issue:
Why these systems are accessible in harmful ways, why harmful outputs are able to circulate socially, and why institutional responses (schools, platforms, legal systems) are often slower than the damage itself.
Protecting children and adolescents here likely depends less on ever-more clever technical tricks, and more on clear red lines, strong platform responsibility, rapid response mechanisms, and education that addresses misuse as a form of abuse rather than as a technical curiosity.
In other words, this problem sits at the intersection of technology, governance, and human systems — and over-optimizing the technical layer alone may give us the illusion of progress while leaving the underlying harm pathways intact.
Your points make sense, especially the distinction between detection as a single layer and broader systemic safeguards. I agree that this issue doesn't belong to any one domain, and that real protection involves technical, legal, educational, and institutional pieces working together.
My perspective comes from the education side, so I’m trying to understand the technical layer better, not because I think it’s the whole solution, but because this forum is the place where that part of the puzzle is actually developed. If I focus the question narrowly here, it’s only because I’m trying to expand my understanding of what can be done on the technical side, not because I see it as the only answer.
Your breakdown helps clarify where the realistic limits and leverage points are. And yes, even when the topic is sensitive, it's still important to surface it across disciplines. Protecting children and teens is a shared responsibility, regardless of which field we come from.
That does sound genuinely interesting and it definitely makes me curious to understand more about both the script and the education management system you’re building.
I’m approaching these issues from an education/child-protection angle, so I’m deepening my understanding of the technical side now that I have the space to focus on it.
If you ever feel like sharing more here or via DM… feel free.
LLMs accept any form of input. It’s next to impossible to guard against the entirety of human language. Not just English, but all languages - including Elvish.
Same goes for images.
It’s one thing to catch people exploring boundaries. It’s another to protect against malicious users.
There need to be real-life consequences.
It’s the wrong abstraction. It’s not “how do we technically solve this”, but “how do we make the potential profit from this as small and risky as possible”.
Anyone thinking otherwise is chasing the dragon’s tail.
Hi! I'm new to this platform, but as an adult I can say the only real solution for this is proper guidance. As human beings we all have emotions, so no one can escape them. In my experience, in our childhood we were only afraid of certain older people; we didn't have any digital devices. Yet we need development and new techniques, and we need to know how to use them. A knife can be used for murder or for surgery, and even surgery can end in death or in recovery. So education alone is not enough for the future.
Then what?
Faith and discipline. Where can we get them? From a code? From a market? No, only from religion.
My personal view is that we should guide students to become well disciplined. Simple. Good luck! Think…
You’re posting this here? In an open AI forum, you can’t even get their bot to make an image of a cannabis plant or babies at the gym in orange shorts without getting censored.
Good luck trying to get it to make deepfakes - you can’t even get past the censorship of things that shouldn’t be censored in the first place.
I wouldn’t try to do it myself, as “that is just training data” (g) is not going to fly, and even if it did, I understand that the people who have to look at these images need therapy for years afterwards.
So, straight up, contract it out, no discussion needed IMO.
I strongly suggest reaching out to experts on bullying and cyberbullying, such as Barbara Coloroso and others who have a demonstrated, lifelong expertise in addressing these issues on the practical side and who understand the social-emotional dynamics - people who are in the long-term trenches with this very messy territory. A thorough grounding in that expertise is a prerequisite to addressing this ongoing and escalating harm on the technical side. The experts will learn from AI developers and vice versa, and great benefit will come from this. I would not be surprised to hear that these partnerships already exist, so I'm looking forward to any good news folks in the community can share about that.
For anyone interested, I got this in my ChatGPT Pulse today, a draft of how the EU is preparing to handle deepfakes, technically (marking, detection, labeling). Sharing it here since it connects directly to the questions in this thread. For more information you can visit the sites that are referred to in that draft.