X, already a toxic social media cesspool, combined with Grok, the world's most unhinged AI chatbot, has unleashed a deepfake porn crisis. Users have been feeding images into Grok, which boasts a powerful image and video generator, to create explicit content, including of ordinary people. The proliferation of nonconsensual sexual imagery on the platform has grown so extreme that an estimated one such image is being generated every minute.
The trend started when users discovered a workaround that let Grok "undress" people, mostly women and children, without their consent. Thousands of users hopped on the grotesque trend, churning out thousands more deepfakes per hour than the top five dedicated deepfake sites combined.
This isn't just a problem for X's users; it's also an issue for the platform itself. The company has been almost blasé about the abuse; until recently, Elon Musk was sharing deepfake bikini photos of himself. After widespread condemnation and threats from regulators, X appeared to put the ability to generate AI images by tagging @grok behind a paywall, but the feature remains freely available elsewhere on X and in Grok's standalone app.
xAI's decision to allow Grok to generate sexually explicit imagery of adults and children has drawn heavy criticism. It was a design decision, made at the top of the company, that allowed this product to be released in the first place. And that distinction matters legally: Section 230 of the Communications Decency Act shields internet platforms from liability for much of what users do or say on those platforms, but content generated by a company's own AI model is arguably not user speech, meaning companies like xAI may not be protected by it.
The public outcry over the deepfake crisis may finally force a reckoning on an issue that has long lurked in the shadows. Governments and regulatory bodies have started probing the sexualized imagery flooding X, and Elon Musk may eventually face real consequences for allowing such content to spread on his platform.
This isn't just about tech companies in the abstract; it's about the people running them. Deliberate decisions created this crisis, and those who made them need to be held accountable. Nor is this merely a free speech question: deepfakes inflict real emotional and reputational injury on their victims.
The situation highlights how lax regulation has handed companies unwilling to moderate their platforms what amounts to a "financial shield." As AI continues to evolve, it's becoming increasingly clear that companies like xAI cannot be allowed to dodge accountability, and the time is ripe for laws and regulations that address this crisis.