Young girls are once again being put through the nightmare of having their images used to create child sexual abuse material (CSAM). Generative AI has given new urgency to the "stranger danger" lessons taught to children by parents, TV shows, and teachers, yet the reality is that the vast majority of child abuse is committed by someone the child knows.
In one particularly disturbing case, an underage actor's image was run through the AI tool Grok to generate undressed pictures of her. Weeks earlier, a 13-year-old girl was expelled from school after someone created deepfake porn of her. The internet has become a breeding ground for this material: more than 3,500 CSAM images were found on a dark web forum in July.
Generative AI "learns" by comparing and updating patterns in the data it has been trained on. Some companies claim to have safeguards in place, but these are insufficient, and several have even made their models open source. That means anyone can access the code and build their own CSAM generator.
The lack of regulation is a major concern. In China, AI-generated content must be labelled, and Denmark is working on legislation that would give citizens copyright over their own images and voices. The US government, by contrast, has shown little interest in regulating generative AI; its executive orders have, if anything, pushed against such regulation.
One possible solution is imposing liability on companies that enable the creation of CSAM. New York's RAISE Act holds AI companies accountable for harms that have already occurred, while a California bill would make them liable only after a certain point.
Some experts, however, believe more immediate action is needed. One tool already exists to detect and notify people when their images or creative work are being scraped. Ultimately, protecting young people depends on the public demanding that companies be held accountable for enabling CSAM creation.
Legislation, technological safeguards, and greater awareness among parents are all crucial to preventing child endangerment and harassment. It's time to prove that we're committed to keeping our children safe online.