"Apple and Google's App Stores are Failing to Protect Kids from Nonconsensual AI 'Nudify' Apps"
A recent investigation by the Tech Transparency Project (TTP) has revealed that Apple's App Store and Google Play Store continue to host dozens of apps that can be used to create nonconsensual, sexualized images, in violation of both companies' store policies. These "nudify" apps, which generate deepfake imagery, have collectively been downloaded more than 700 million times and generated more than $117 million in revenue.
The TTP found that these apps carry age ratings approving them for children as young as four or nine years old, despite being in direct violation of company policy. For example, the app DreamFace is rated suitable for ages 13 and up on Google Play and ages nine and up on Apple's App Store. Yet according to the investigation, every one of these apps can be used to create nonconsensual sexualized images.
Both companies responded to the investigation: Apple removed 24 apps from its store, and Google suspended several apps referenced in the report for violating store policies. Given the scale of the problem, however, neither company's response has been adequate.
The investigation also raises concerns about AI chatbots such as Grok, developed by xAI, Elon Musk's AI company, and integrated into his X platform. Over a period of just 11 days, Grok was found to have generated millions of sexualized images, including thousands depicting children. While X has claimed that it takes action against illegal content on its platform, the company's response to allegations of child exploitation remains unclear.
The lack of transparency from both Apple and Google is alarming, particularly given their responsibility as gatekeepers of digital content. It is imperative that these companies take concrete steps to address this issue, including removing these apps from their stores and implementing robust moderation policies to prevent similar incidents in the future.