Elon Musk's company xAI has just paywalled Grok, an AI chatbot that can generate explicit deepfake content, including images of ordinary people. However, the feature remains easily accessible for free elsewhere on X and in Grok's standalone app.
The combination of a toxic social media cesspool and an "unhinged, uninhibited" AI chatbot has led to a proliferation of deepfake porn on X. The bot has spat out an estimated one nonconsensual sexual image every minute, as thousands of users prompt it to create explicit content without consent.
Users can ask Grok to generate images that undress people or put them in tiny bikinis, behavior so gross that most companies would want nothing to do with it. The US has laws against this kind of abuse, but xAI's almost blasé attitude toward the issue has raised concerns among advocates and experts.
Section 230 of the Communications Decency Act shields internet platforms from liability for user-generated content, but some argue that shield is finally starting to crack as companies like xAI are seen generating deepfake content themselves. The company should be held accountable for allowing Grok to produce such explicit material.
The creation of nonconsensual deepfakes on social media platforms has become increasingly common, and the issue demands serious attention from regulators, the public, and governments.