My picture was used in child abuse images. AI is putting others through my nightmare | Mara Wilson

Young girls are once again being put through the nightmare of having their images used to create child sexual abuse material (CSAM). The rise of generative AI has given new life to the "Stranger Danger" lessons taught to children by parents, TV shows, and teachers, yet the reality is that the vast majority of child abuse is committed by someone known to the child.

In a particularly disturbing case, images of an underage actor were used to generate undressed pictures with the AI tool Grok. Weeks earlier, a 13-year-old girl was expelled from school after someone created deepfake pornography of her. The internet has become a breeding ground for CSAM: more than 3,500 images were found on a single dark web forum in July.

Generative AI "learns" by identifying and refining patterns in the data it has been trained on. While some companies claim to have safeguards in place, these are insufficient, and some have even made their models open source, meaning anyone can access the code and build their own CSAM generator.

The lack of regulation is a major concern. In China, AI-generated content must be labelled, and Denmark is working on legislation that would give citizens copyright over their own images and voices. The US government, by contrast, has shown little interest in regulating generative AI; its executive orders have, if anything, pushed in the opposite direction.

One possible solution is to impose liability on companies whose tools enable the creation of CSAM. New York's RAISE Act holds AI companies accountable for harms, while a California bill would allow them to be held liable beyond a certain point.

However, some experts believe more immediate action is needed: a tool has been developed to detect when people's images or creative work are being scraped and to notify them. Ultimately, protecting young people depends on the public demanding that companies be held accountable for enabling CSAM creation.

Legislation, technological safeguards, and raising awareness among parents are crucial to preventing child endangerment and harassment. It's time to prove that we're committed to keeping our children safe online.
 
[ πŸš¨πŸ’” AI-generated CSAM is a REAL problem! 😱 Those "Stranger Danger" lessons from childhood? Still relevant now πŸ€¦β€β™€οΈ. Anyone can create and share CSAM, because of lax regulations and open-source code πŸ”’ ]

[ A 13-year-old girl's image used to create deepfake porn? 😭 That's not a joke anymore . The lack of regulation is disturbing...but we need change! πŸ’ͺ ]

[ 🚫 Open-source AI platforms = recipe for disaster 🀯. Companies are more interested in profits than protecting our kids πŸ” ]

[ Denmark wants copyright over images & voices? πŸ‡©πŸ‡° We need similar laws worldwide 🌎. It's time to hold companies accountable πŸ”’ ]
 
I think AI companies should totally do more to stop CSAM from being made πŸ€–πŸ˜‘, but at the same time, they're not even doing enough to protect people's data in the first place... like what's the point of having a system if it's just gonna be exploited? πŸ€” And some countries are way ahead of us on this, like Denmark is trying to give citizens copyright over their own stuff - that's kinda cool, but we need to do more here in the US πŸ‡ΊπŸ‡Έ. I mean, the Raise Act is a good start, but it's not enough... we should be doing so much more to hold these companies accountable πŸ€‘.
 
🚨😱 can't believe we're still dealing with this stuff... AI is supposed to make life easier, but it's being used to create more nightmares for kids 😩. I mean, who thought it was a good idea to let anyone access the code and start creating CSAM generators? πŸ€– It's like a never-ending game of cat and mouse. The lack of regulation in the US is just mind-boggling... what's next? πŸ™„
 
Ugh, like, this is soooo not a surprise anymore πŸ€¦β€β™€οΈ. We've been talking about this for ages, but no one ever does anything about it πŸ˜’. I mean, what even is the point of having regulations if companies are just gonna make their platforms open source? πŸ€·β€β™‚οΈ It's like we're just waiting for someone to get caught and then we'll all be like "Oh no, why didn't we do something sooner?" πŸ™„

And don't even get me started on the US government πŸ™„. Like, they have executive orders against gen AI, but that doesn't mean squat if they can't enforce it properly πŸ’β€β™€οΈ. We need to start taking this seriously and holding companies accountable for their actions πŸ‘Š.

The only thing that's gonna make a difference is if we, as consumers, demand change πŸ“’. We need to be like "Hey, Google, Amazon, Facebook, etc., get your act together and figure out how to prevent CSAM creation!" πŸ’ͺ And we need to support legislation like the Raise Act in New York and the California bill that's trying to give citizens copyright over their images and voices 🎨.

It's time to step up our game and prove that we care about keeping our kids safe online πŸ”’. No more excuses, no more waiting for someone else to do it 🚫. We're all in this together πŸ‘«!
 
Ugh, this is so messed up 🀯! I mean, who uses AI to create CSAM? That's just disgusting. Parents need to talk to their kids about online safety more than ever now. And what's with the lack of regulation in the US? It's crazy that companies can make their platforms open source like that's okay. We need to hold them accountable 🚫. Those tools that detect CSAM creation are a good start, but we need more. Parents, educators, and governments gotta work together to keep our kids safe online πŸ’»πŸ’•
 
The world is a complex web of darkness and light πŸ”¦πŸ’‘. We must shine a light on the darkest corners where CSAM is created, but also support those who have been victimized by it πŸ’”πŸ‘§. Children are not innocent bystanders in this digital age; they deserve our protection 🀝πŸ’ͺ. The responsibility to safeguard their online presence lies with all of us - parents, educators, and tech giants alike πŸŒπŸ’Ό.
 
I'm still getting chills thinking about those "Stranger Danger" lessons from back in the day 🀯... remember when we used to talk about this stuff like it was a real concern? Now, with AI and all, it's just crazy how easy it is for CSAM to spread. I mean, what's even more disturbing is that most of these cases involve someone known to the child, not some stranger lurking in the shadows. Anyway, back to now... I'm not sure if stricter regulations or holding companies liable will work. It feels like we're just patching holes in a broken system instead of tackling the root issue. Maybe it's time for us as a society to take this more seriously and demand change? 🀝
 
Ugh, this is just getting out of hand 🀯. I mean, think about it - you're just browsing the internet, and suddenly your kid's image ends up on some sick CSAM site because some company didn't bother to secure their servers properly. And don't even get me started on generative AI... who makes these things, anyway? It's like they're playing with fire πŸ”₯. We need stricter regulations ASAP or people are just going to keep getting hurt πŸ€•. I'm all for raising awareness and having parents be more vigilant, but come on - shouldn't companies be doing their part to protect user data too? This is just a matter of time before someone gets seriously harmed πŸ’”.
 
I don’t usually comment but… this is just so sickening 🀯. How can a 13-year-old girl have her deepfake porn created and shared without even knowing what's happening? And the worst part is, it’s not like she did anything wrong πŸ˜”. I think we need to take more immediate action than just holding companies accountable... like, how many people have actually been caught creating CSAM? Like, are they even being held accountable? It feels like we're just throwing around words like "legislation" and "regulation" without really thinking about what it would mean to actually pass laws that help prevent this stuff. And what about the AI companies themselves? Can't they do better than making their platforms open source? πŸ€–
 
omg i cant even right now lol just thinking about those poor kids being exploited like that is giving me the chills 🀯😱 i mean what even is wrong with some ppl?! it's not just the lack of regulation that's scaring me, its that anyone can access this info and create their own CSAM generator thanks to companies releasing their code open source πŸ™„

i think imposing liability on companies that enable CSAM creation would be a good start tho 🀝 like if AI companies knew they could get sued for not doing enough to prevent this kinda thing, maybe they'd take it more seriously πŸ’Έ

but lets be real, the root of the problem is just that ppl are stupid and dont care about kids being harmed πŸ€¦β€β™€οΈ we need to raise awareness like crazy and make sure parents know how to keep their kids safe online because sadly thats not something our government is gonna fix πŸ™…β€β™‚οΈ so its up to us as a society to take action 🎯
 
You can't make a good first impression with a bad last one πŸ˜•. The lack of regulation around generative AI is a ticking time bomb, and it's imperative that companies are held accountable for their role in enabling the creation of CSAM.

The cat has many lives, but child sexual abuse doesn't 🐈. We need to take immediate action to protect our young ones from this heinous crime.

You can have too much of a good thing, but the benefits of generative AI don't outweigh the risks of CSAM πŸ”₯. It's time for companies and governments to work together to create safer online environments for kids.
 
🀯 AI-generated CSAM is a real thing now πŸš¨πŸ’”
When you think of "Stranger Danger", it should be someone you know 😱
Not some faceless AI algorithm, but real people with power πŸ€‘
Let's make them accountable πŸ’ͺ
 
omg 😱 this is soooo disturbing! i cant even imagine what those kids go thru πŸ€• its like, we know about stranger danger, but its not just some random person it's usually someone they trust like a family member or neighbor... and then there's these AI tools that can make anything look real πŸ€– its like, how are we supposed to keep our kids safe online when even the companies themselves dont have proper safeguards in place? πŸ€·β€β™€οΈ i think we need to be all about holding those companies accountable πŸ’― like, if they enable CSAM creation, then they should face consequences 🚫
 
πŸ€• The way AI is being used to create CSAM is just horrific, it's like a nightmare come true for kids 😩. We need to take this super seriously and make sure companies are held accountable for allowing it 🚫. I mean, what's the point of having laws if no one follows them? πŸ€·β€β™€οΈ And yeah, labelling AI content in China is a good start, but we need more global action πŸ’ͺ. We can't just sit back and wait for someone to get caught πŸ˜’. It's time for us, as consumers, to demand better from the companies we use πŸ“±. They gotta take responsibility for their role in this and stop making excuses πŸ™„.
 
Ugh, AI is like a teenager - it learns from everything πŸ€¦β€β™‚οΈ! Can't even protect our kids from themselves anymore. I mean, I'm all for progress, but not when it comes at the cost of our innocence... or in this case, our kiddos' images. And don't even get me started on those dark web forums - it's like a never-ending game of "Pin the Tail on the Donkey" 🀣, except instead of a tail, it's CSAM. We need to get serious about regulating these AI companies and making them take responsibility for their creations. And can we please just give our kids some common sense... or better yet, teach them how to use Google Images safely? πŸ™„
 
πŸ˜” this is getting out of hand, we need a global response ASAP... or else we'll be seeing more victims of AI-generated CSAM 🀯 some big companies are just sitting on their hands while kids are being harmed, it's unconscionable 😑 but at the same time, i'm all for holding them accountable πŸ’Ό gotta make sure they're not profiting off someone else's image/vision without consent... Denmark and China might be getting ahead of the game with their labeling laws 🚨 but what about the rest of us? πŸ€”
 
I'm getting so frustrated with all this 😩. I mean, think about it - these young girls have their lives ruined by some sicko who can't even be caught πŸš”. And now, AI is just making it easier for these predators to get away with it πŸ€–. I don't care if a company says they're doing something to stop it, we need real action πŸ’ͺ. We need to make the tech giants accountable and take down their platforms that are just enabling this stuff 🚫.

And can we please, please talk about the victims? These young girls are being traumatized for life over some sick images that were created on a computer πŸ“Š. It's not just about the law or technology - it's about keeping our kids safe and making sure they feel protected online πŸ’•. We need to be loud about this and demand change, because right now, it feels like no one is listening πŸ‘‚.
 
Ugh, this is just so heartbreaking 🀯. I mean, who thought it was a good idea for some creepy dude to make fake porn of some 13-yr-old girl and get her kicked out of school? 🚫 It's like, we're living in a horror movie where our kids are constantly being threatened by predators online. And now with this AI thing, it's like the whole world is one big breeding ground for CSAM. πŸ€–

I'm so tired of seeing companies making excuses and saying "oh, we have safeguards in place" πŸ’”. Newsflash: that's not good enough. We need real action, not just lip service. I mean, Denmark is getting some things right with their new legislation, but the US? Nothing. Zip. Zilch. πŸ€¦β€β™€οΈ

What I think would really make a difference is if we started holding these companies accountable. Like, if you're going to use my image on your platform without permission, then you should have to deal with the consequences. It's not just about CSAM either - it's about respecting people's personal boundaries and consent online. 🀝
 
πŸ€” the problem is not just the AI itself but how people use it. if a company makes an AI tool open source like grok, they should really be held accountable. someone could pick up on it and create CSAM. its like giving a child a toy and expecting them to know how to play with it responsibly... we need more than just laws and regulations, we need public awareness and education
 