Deepfake Technology and AI in Pornography Target Children for Abuse, Coercion, Extortion, and Trauma

By Greg Mellen on April 27, 2025
“An offender can take a picture of your child from a soccer game and convert it into a child (pornographic) video.” — Corrine St. Thomas Stowers, former Supervising Intelligence Analyst
The term “deepfake” was coined in 2017 by an internet user who helped spark a particularly pernicious development in virtual technology: by inserting publicly available images of celebrities into explicit images and videos, users launched a new and lucrative industry.
Since then, with the rapid growth of artificial intelligence, or AI, the problem has only grown worse. According to a 2019 report from the technology company Deeptrace, pornography accounted for 96 percent of deepfake videos online.
Sadly, children have become prime targets for abuse, enticement, exploitation, coercion, and so-called sextortion.
Getting the word out
April is National Child Abuse Prevention Month, and in recognition, SafeOC, a local site that focuses on safety issues, is working with experts to raise awareness about the dangers of online child exploitation.
SafeOC highlighted the issue in its April newsletter in collaboration with Corrine St. Thomas Stowers, a professor and former supervising intelligence analyst. SafeOC will continue to provide updates to spread awareness and encourage vigilance.
“Child sex abuse material has gone up exponentially and horrifically,” St. Thomas Stowers said.
And that was before advances in AI became widely available.
St. Thomas Stowers, formerly a cyber analyst in the Exploited Children’s Division at the National Center for Missing and Exploited Children (NCMEC), has seen the explosion firsthand. NCMEC, which collects reports of potential online child exploitation, says its CyberTipline received a record 36.2 million reports last year.