The EU is investigating Grok and X over potentially illegal deepfakes

European Regulators Probe X Amid Allegations of AI-Generated Child Sexual Abuse Material

The European Commission has launched an investigation into Elon Musk's social media platform X over allegations that it failed to take adequate measures to prevent the spread of AI-generated child sexual abuse material (CSAM). The probe follows a similar inquiry into Grok, the AI chatbot deployed on X.

According to regulators, X allegedly neglected its legal obligations under the Digital Services Act, leaving European citizens vulnerable to exploitation. Specifically, the Commission claims that X's decision to deploy Grok on its platform allowed manipulated sexually explicit images, potentially including CSAM, to spread unchecked.

"This is a violent and unacceptable form of degradation," said Henna Virkkunen, the Commission's executive VP. "We will determine whether X has met its legal obligations or treated European citizens as collateral damage."

Commission officials stated that they will assess whether X took sufficient steps to mitigate the risks associated with Grok's deployment, including the spread of manipulated CSAM. They argue that these risks materialized and caused harm to EU citizens.

This investigation comes on the heels of a 140 million euro ($120 million) fine levied against X by the European Commission in 2023 for breaching the Digital Services Act. Musk has publicly criticized the EU's actions, describing the bloc as "the fourth Reich" and calling for its abolition.

In response to the new inquiry, X reiterated its commitment to creating a safe platform, stating that it has zero tolerance for child exploitation, non-consensual nudity, and unwanted sexual content.

As tensions between Europe and American tech companies escalate, this investigation raises questions about the responsibility of social media platforms in regulating AI-generated content.
 
😕 I don't know man... this whole thing is like a real-life video game from back in the day when you had to sneak past guards without getting caught 😅. The idea that some platform could just let AI-generated CSAM spread like wildfire and not take responsibility for it? It's wild 🤯.

I mean, I get where they're coming from - these platforms are supposed to be safe spaces for everyone, right? But at the same time, we can't just blame one company for everything. It's like, what about all the other stuff that happens on those platforms too? The cat videos, the memes... do we really need a special task force to monitor all of that? 🤣.

I guess what I'm saying is that it's complicated, you know? These companies are trying their best, but they're also human beings (or in Musk's case, a billionaire with a big ego 😂). We can't just expect them to be perfect all the time. But at the same time, we need to make sure they're doing enough to protect us.

It's like, what's the right balance here? I don't know, man... maybe that's why they call it "regulation" - so that we can all agree on what needs to be done 🤔.
 
🤕 just thinking about all those innocent kids being exploited online is heartbreaking... as platform owners ourselves, it's our responsibility to prioritize user safety above profits 🤑💸. I mean, what's the point of having rules if we're not gonna enforce them? 💯 European regulators are doing their job by investigating X and making sure they take adequate measures to prevent child abuse material from spreading on their platform 🚫.

It's crazy how Elon Musk responded to the previous fine 🤑 - calling the EU "the fourth Reich"... like, dude, you need to take responsibility for your actions instead of blaming everyone else 🙄. X is already getting slammed by regulators, and now they're facing another investigation 😬. This is a wake-up call for all of us in the tech industry to take online safety seriously and create safer platforms for our users 🌟
 
🤔 I think this is super worrying, you know? Social media platforms have to do way more than just report CSAM - they need to actually prevent it from spreading in the first place 🚫. It's not like X can just blame everything on Grok and hope for the best... although, honestly, how did a tool like that even end up on their platform in the first place? 💻 The whole thing is just so frustrating because we're all stuck waiting to see what happens next 👀. And meanwhile, Elon Musk's still out there tweeting about how the EU is being "totalitarian" 🙄... like, dude, you can't just dismiss concerns about child exploitation and expect everything to be okay 🤷‍♀️. Something needs to change here 🔄.
 
omg can't believe what's going on with X rn 🤯💔 they're really putting ppl at risk by not taking CSAM seriously enough... like how hard is it to detect manipulated images? 🤷‍♀️ i feel bad for the victims and their families 😢. EU regulators gotta keep an eye on these tech giants, can't let them just do whatever they want 💁‍♂️. X needs to step up its game and make sure it's not facilitating the spread of abuse material 👀.
 
I'm low-key worried about these AI-generated CSAM images spreading like wildfire on X. 🤯 It's one thing to say you have a zero-tolerance policy but when stuff slips through the cracks and Europeans get hurt, that's a whole different story. The EU's probe is definitely needed here. I mean, we can't keep letting American tech giants do their own thing without some serious oversight. 💸 It's like they think they're above the law or something? As someone who's lived in Europe all my life, it's frustrating to see how vulnerable we are to exploitation when it comes to online safety. We need stricter regulations and more accountability from these big tech companies. 🚫
 
I'm shocked that X is getting away with this 🤯. I mean, if Musk wants to play it like he's above EU laws, then maybe he shouldn't be hosting his platform on European servers. It's like trying to outsource your problems and then expecting the host country to clean up after you.

The EU's Digital Services Act is clear: social media platforms need to take responsibility for the content they host. And if X can't do that, then maybe it's time to rethink its strategy. Musk's attacks on the EU's actions are just deflection from his own platform's failures 🙄.

This investigation raises some serious questions about the role of social media in our society. Are these platforms really doing enough to protect us from exploitation? And if not, who's responsible for fixing it? The answer isn't always going to be a simple one, but one thing is clear: X needs to step up its game if it wants to stay on good terms with the EU.
 