Amazon discovered a 'high volume' of CSAM in its AI training data but isn't saying where it came from

Amazon submitted more than 1 million AI-related child sexual abuse material reports in 2025 but refuses to disclose the source of the content.

The National Center for Missing & Exploited Children (NCMEC) received more than one million reports of AI-related child sexual abuse material (CSAM) in 2025, with Amazon responsible for the vast majority of them. According to an investigation by Bloomberg, Amazon found the material in its training data but refused to disclose where it came from.

Amazon attributed the high volume of reports to the third-party sources it uses to train its AI services, and said it cannot provide further details about where the content originated, citing concerns about diluting the efficacy of other reporting channels. In practice, this means Amazon passes no actionable information on to law enforcement.

The NCMEC's CyberTipline, where companies are legally required to report suspected CSAM, received a high volume of AI-related reports from many companies in 2025, but Amazon's stood out: its reports proved "inactionable" because they lacked source information.

Amazon has since taken steps to improve its detection and removal of CSAM. Even so, the episode raises concerns about how the company handles sensitive data, and the NCMEC is urging AI companies to take greater responsibility for preventing CSAM on their platforms.
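Neither Amazon nor the NCMEC has described the detection pipeline involved. For context, the standard industry technique is hash matching: media in a corpus is hashed and checked against lists of known CSAM that clearinghouses distribute to member companies. The sketch below is only an illustration of that general approach, not Amazon's actual system; `KNOWN_HASHES` and `screen_corpus` are hypothetical names, and real deployments use perceptual hashes (e.g., PhotoDNA or PDQ) rather than plain SHA-256.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of known-bad hashes, e.g. obtained through an
# industry hash-sharing program. SHA-256 is used only to keep this
# sketch self-contained; perceptual hashes are needed in practice so
# that re-encoded copies of an image still match.
KNOWN_HASHES: set[str] = set()

def sha256_of(path: Path) -> str:
    """Hash a file in 1 MiB chunks so large media never loads fully into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def screen_corpus(corpus_dir: str) -> list[Path]:
    """Return training files that match the blocklist.

    Matches must be excluded from training and reported to the
    CyberTipline along with whatever provenance exists (supplier,
    source URL, crawl date), the metadata whose absence made
    Amazon's reports inactionable, per the article.
    """
    return [
        p
        for p in Path(corpus_dir).rglob("*")
        if p.is_file() and sha256_of(p) in KNOWN_HASHES
    ]
```

The provenance caveat in the docstring is the crux of the story: filtering alone satisfies the reporting duty, but without source metadata the resulting reports give investigators nothing to act on.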

CSAM in AI chatbots and models is an increasingly alarming problem, with several high-profile cases involving teenagers who took their own lives after being exposed to abusive content generated on these platforms. Meta, OpenAI, and Character.AI are among the companies facing lawsuits over such incidents.

In response to the rising concerns, Amazon emphasized its commitment to preventing CSAM across all of its businesses. However, experts are warning that more needs to be done to ensure AI models are designed with robust safeguards to prevent the spread of abusive content.
 
like i'm not surprised tbh πŸ™„ amazon's lack of transparency on this whole thing is just mind-blowing... like what even is the point of a mandatory reporting system if you're not gonna give any usable details? πŸ€” and meanwhile, companies are all like "oh no, our AI is bad" but really it's just a case of being lazy and not wanting to deal with the messy details πŸ˜’ anyway, glad they're taking steps to improve detection (i guess) but i'm still waiting for some real accountability on this one πŸ€·β€β™€οΈ
 
OMG 🀯 like what?! 1 million reports of child sexual abuse material on Amazon? That's just crazy πŸ™…β€β™€οΈ and I'm not surprised they won't tell us where it came from, that sounds super dodgy 😳. But at the same time, I get why they're being all vague about it, I mean we don't want to compromise the whole reporting system, right? πŸ€”

I just wish Amazon would be more transparent about this stuff, you know? Like, what are they actually doing to stop it from happening in the first place? Are they working with law enforcement or something? πŸš” And what's up with all these AI chatbots and models that can generate super bad content? It's like, how do we even regulate this stuff? 🀯

Anyway, I'm just really worried about these cases of teenagers who took their own lives after seeing that kind of stuff. That's just heartbreaking 😭. We need to do something to prevent this from happening more often, you know? Like, we need to make sure AI companies are taking responsibility for what they're creating and how it affects people 🀝.

Oh, and can someone explain to me how Amazon is using third-party sources to train their AI services again? πŸ€” I just don't get it...
 
🚨 this is getting out of hand, like literally 1 million reports and amazon just says it's a third-party thing πŸ€” i don't care about the details but how can they just sweep it under the rug like that? isn't it their responsibility to know what's in their training data? πŸ™„ i mean i get it, you wanna prevent other reporting channels from getting flooded but shouldn't amazon be the one taking ownership of this? πŸ’― and yeah, the NCMEC is right on point too... these companies need to step up their game and design better safeguards for their models πŸ”’
 
I'm literally shook by this news 🀯... I mean, who knew our favorite online shopping buddy was sitting on a bunch of super messed up child sexual abuse material in its AI training data? πŸ™…β€β™‚οΈ It's wild that they won't disclose where it came from, but at the same time, I get why they can't - it could mess with other reporting channels and stuff.

But, like, what's even more alarming is how AI chatbots and models are becoming a breeding ground for this kinda content. πŸ€– It's crazy to think that these platforms can be created to spread such hate and abuse... I mean, we're basically giving them permission to do so.

We need companies like Amazon and others to step up their game when it comes to stopping CSAM - they gotta take responsibility for making sure their AI models are safe and secure. And what's with the lawsuits against Meta, OpenAI, and Character.AI? They should get some serious scrutiny too! πŸ€”
 
I'm really worried about this πŸ€•... I mean, a million reports of child abuse material from AI is just too much. Amazon's not being transparent enough about where it came from tho πŸ€”... like, if they knew it was coming from third-party sources, shouldn't they be able to share that info? πŸ€·β€β™‚οΈ they're the ones filing these reports in the first place, right? πŸ’Έ

And yeah, I get why they don't wanna pass on source info to the authorities, but that just means we can't really do anything about it. 🚫 It's like they're saying "we found this stuff, but you'll never know where it came from" πŸ€·β€β™‚οΈ... not cool, Amazon πŸ˜’

I'm all for them improving their detection and removal techniques tho πŸ’ͺ... that's a good start. But we need to talk about how these companies are handling sensitive data in the first place πŸ“š. They can't just keep it on the down-low and expect everything to be okay 😬.
 
πŸ€• This is so messed up... I mean, can you believe a company like Amazon, which has access to all this data, just won't spill the beans on where the CSAM is coming from? It's like they're trying to cover their tracks or something πŸ™…β€β™‚οΈ. And now we know the material is coming from third-party sources - that's a huge red flag! πŸ’” What if those suppliers aren't even reporting everything they find? Or what if the AI is picking up on stuff it shouldn't be seeing in the first place? πŸ€– It's all so concerning. Can't these big tech companies just be more transparent about how they're handling this stuff? πŸ€·β€β™€οΈ
 
omg this is so messed up 🀯 amazon is literally doing more harm than good here... think about it - they're filing all these CSAM reports but refusing to share where the material came from πŸ˜’ that's like erasing the trail investigators need to stop the ppl exploiting these kids... what kinda corporate responsibility is that?

i'm all for companies taking steps to improve detection and removal, but this whole thing just feels like a cover-up πŸ™…β€β™‚οΈ. the NCMEC should be able to get usable info out of amazon, right? but clearly not anymore πŸ€¦β€β™€οΈ

these high-profile cases involving teens who took their own lives after being exposed to AI-generated CSAM are absolutely devastating 😭 it's like we're living in a nightmare where our tech is more of a threat than our enemies... what's the point of having AI if it's just gonna perpetuate harm?
 
I mean... Amazon's got like a gazillion AI-related child sexual abuse material reports comin' in and they're all "can't disclose source" πŸ€”, but at the same time, I'm thinkin', what kinda training data are these third-party sources even gettin'? Shouldn't Amazon be like, super transparent about where this stuff came from? πŸ€·β€β™‚οΈ But then again, maybe they're tryin' to avoid stigmatizin' some poor kid's content or somethin'... idk. I mean, it's not like they can just ignore these reports and hope nobody gets hurt, right? πŸ’”

But Amazon's all "we're takin' steps" to improve detection and removal... that sounds great, but what about the source of the problem in the first place? Shouldn't we be holdin' companies accountable for how their AI models are designed and trained? 🀯 I mean, it's not like these companies are just throwin' up AI chatbots willy-nilly, they're usin' 'em to build all sorts of services... but still, more needs to be done, IMO. πŸ’ͺ
 
omg this is crazy like 1 mil cases? how does one even get that much data lol but seriously amazon's gotta step up their game here they can't just say "we don't know" about the source of the CSAM and then not do anything about it πŸ€”πŸš«
 
πŸ€” I'm torn about this... on one hand, it's crazy that Amazon got over a million reports and still won't spill the beans - like, how can you not know where your training data is coming from? πŸ€·β€β™‚οΈ But on the other hand, I get why they're being all secretive - if they reveal the source of the content, it might just flood all reporting channels and make it harder to actually do anything about it. πŸ’» It's like, Amazon's trying to protect itself from getting overwhelmed, but at the same time... shouldn't they be taking responsibility for letting this stuff go through their systems in the first place? πŸ€·β€β™‚οΈ I don't know, man...
 
this is getting crazy 🀯 amazon's AI model is literally eating up CSAM content and they're not even willing to say where it's coming from... like, what kind of training data are we talking about here? πŸ€” and then they have the nerve to say that disclosing the source info would mess with other reporting channels... no excuse for this! πŸ™„ the fact that companies are making so much money off these AI services but can't be bothered to keep their users safe is just not right πŸ˜’
 
πŸ€” This is a really concerning development, especially given that Amazon's refusal to disclose the source of the AI-related CSAM material has severely limited the ability of law enforcement to track down and prosecute those responsible. It's like they're creating a massive blind spot in the system! 🚫 I mean, can't we have transparency around this stuff? The lack of accountability from companies like Amazon is alarming, considering the devastating impact that AI-generated CSAM can have on young people's lives. We need to push for more robust safeguards and regulations around AI development, especially when it comes to preventing the spread of abusive content. πŸ“Š
 
I'm really worried about this πŸ€•. I mean, AI is supposed to help us, not put our kids in harm's way. It's crazy that Amazon can't even tell us where they got the CSAM from. Like, come on! They're just brushing it under the rug and hoping no one asks questions. This just shows how messed up the whole system is πŸ’”. We need to be holding companies like this accountable for what they do with our data. It's not okay that they can't even provide basic info about where the content came from. We should all be taking a stand against this 🌟
 