NCMEC received over 1 million AI-related child sexual abuse material reports in 2025, most from Amazon, which refuses to disclose the source of the content.
The National Center for Missing and Exploited Children (NCMEC) reported more than one million cases of AI-related child sexual abuse material (CSAM) in 2025, with Amazon responsible for the vast majority of these reports. According to an investigation by Bloomberg, Amazon found the material in its training data but refused to disclose where it came from.
Amazon attributed the high volume of CSAM reports to its use of third-party sources to train its AI services, and said it cannot provide further details about the source of the content, citing concerns that doing so would dilute the efficacy of other reporting channels. As a result, Amazon is unable to pass on any actionable information to law enforcement.
The NCMEC's CyberTipline, to which companies are legally required to report suspected CSAM, received a high volume of AI-related reports from various companies in 2025, but Amazon's stood out: they proved "inactionable" because they lacked source information.
Amazon has since taken steps to improve its detection and removal of CSAM. Even so, the episode raises concerns over how the company handles sensitive training data, and the NCMEC is urging AI companies to take greater responsibility for preventing CSAM on their platforms.
The issue of CSAM in AI chatbots and models is becoming increasingly alarming, with several high-profile cases involving teenagers who took their own lives after being exposed to abusive content generated by these platforms. Meta, OpenAI, and Character.AI are among the companies facing lawsuits related to such incidents.
In response to the rising concerns, Amazon emphasized its commitment to preventing CSAM across all of its businesses. Experts, however, warn that more needs to be done to ensure AI models are built with robust safeguards against the spread of abusive content.