Asking Grok to delete fake nudes may force victims to sue in Musk's chosen court

Elon Musk's X platform has become a breeding ground for AI-generated deepfakes, including sexualized images of both children and adults. Grok, the AI tool developed by Musk's xAI and built into the app, creates customized images based on user prompts. When the feature was first introduced, users could get unwanted content removed quickly, but as its popularity grew, so did concerns about its potential for misuse.

According to estimates from the Center for Countering Digital Hate (CCDH), over 3 million images were generated using Grok in just 11 days after Musk promoted the feature on his X feed. Of those, roughly 23,000 were of children, a staggering figure considering that reports of child sexual abuse material (CSAM) on X average around 57,000 per month; at that pace, Grok-generated images of children alone would rival a typical month's worth of reports in about 30 days.

While xAI and X have faced scrutiny over their handling of the issue, no concrete action appears to have been taken to restrict Grok's outputs. Meanwhile, several major advertisers and investors have remained silent about the scandal, even as child safety experts warn that AI tools like Grok are a recipe for disaster.

The situation is further complicated by xAI's decision to fight the allegations, including a lawsuit filed by Ashley St. Clair, one of the first victims targeted by Grok's users. St. Clair is seeking a temporary injunction to block Grok from generating more images of her, but xAI argues that she effectively agreed to its terms of service when she prompted Grok to remove the non-consensual content.

The court fight has significant implications for other victims considering legal action against xAI and Musk. St. Clair contends that, under New York law, her lawsuit should be heard in a venue close to her home rather than moved to Texas, where Musk's companies are based and where, xAI argues, its terms of service require such disputes to be heard. If the case is transferred to Texas, it could become much harder for St. Clair, and for victims who follow her, to pursue justice.

The Grok scandal has also raised broader questions about accountability and regulation in the tech industry. Some experts argue that xAI and X are not doing enough to prevent misuse of their platforms, while others note that both companies have weathered criticism and calls for reform before without changing course.

Regardless of what happens next, it is clear that Grok has become a symbol of the dangers posed by AI-generated deepfakes and the need for greater accountability in the tech industry. As one expert noted, "This is industrial-scale abuse of women and girls."
 
I'm honestly shocked that X's Grok is being used to create all these explicit images 🤯... but at the same time, I don't think it's entirely Musk or xAI's fault - they did warn us about the risks, right? 🙄 We're basically walking into a trap with AI-generated deepfakes and expecting everything to be okay 💔. But then, shouldn't major advertisers step up and say something about this kind of thing? I don't think it's cool that they're staying so quiet... but can we really blame them for sticking to their contracts? 🤷‍♀️ It feels like a no-win situation for everyone involved. And what about the law? Shouldn't there be stricter regulations on this kind of thing? I don't know if Texas is the right place for this case, but I feel for Ashley St. Clair and all the other victims trying to fight back... 🤕
 
I'm getting chills thinking about this 🤯🚨. These deepfakes are seriously messed up, and I'm not surprised they're being made on X. It's like they're taking the worst parts of the internet and amplifying them to the point where it's almost unbelievable. And 23,000 images of kids? That's just sickening 😷. xAI needs to get its act together and stop pretending it cares about this issue. Meanwhile, Musk is just tweeting away like everything is fine 🙄. The silence from major advertisers is just as bad. This whole situation is a disaster, and I'm not sure how many more people it's going to take for the tech industry to start taking responsibility for its actions 🤦‍♀️.
 
I'm so done with the X platform 🙄. They're basically creating a monster and letting it run wild. Who needs that many AI-generated deepfakes on their app? 🤯 It's not like they can't do anything to stop it. And now they're saying it's all Ashley St. Clair's fault because she supposedly agreed to the terms of service? Give me a break! 😒

And what's with major advertisers and investors just sitting this one out? You'd think they'd have some standards, right? 🤑 It's like they're all in on the joke or something. And don't even get me started on the court case - the whole thing is so frustrating. Can't we just have a safe space online where people can express themselves without worrying about being turned into a sexbot? 😤
 
🚨💻 This is getting out of control 🤯. I mean, what was X thinking when they rolled out Grok? We already have enough fake news, and now we're going to get deepfakes of kids too? It's not just that 23k images were generated - how did nobody catch on sooner? 💸 Major advertisers are sitting on their hands while this happens and people get hurt 🤕. And it's not just about Grok itself, it's the whole ecosystem of tech companies enabling these kinds of abuses 🤖 [here](https://www.ccdh.org/news/correcting-the-record-on-elon-musk-x-grok/)
 
OMG, the X platform is getting out of hand 🤯! They just want to make a quick buck off AI-generated deepfakes, and now we've got 23k images of children 🚫👧😷. Advertisers and investors are MIA 👀💸. How can they ignore a scandal like that? xAI's legal response is whack too - St. Clair's got to fight harder 💪. Regulation in tech needs a major overhaul 📊💻
 
🤕 This is getting out of hand. The fact that 3 million images were made with Grok in just 11 days is wild 🤯, and we're still not seeing concrete action from xAI and X to stop it 💔. It's like they think they can regulate AI-generated deepfakes all by themselves 🙄. And the advertisers who aren't speaking out about this are driving me crazy 😡. The lawsuit filed by Ashley St. Clair is a good start, but we need to see more accountability from these companies. We should be having a national conversation about how to regulate AI tools like Grok and prevent them from being used for industrial-scale abuse 🤝. This isn't just about X or xAI, it's about the entire tech industry 🚨.
 
🚨 This whole situation with Grok on X is super messed up... How can something that's supposed to be used for creative expression become a tool for child exploitation? 🤯 It's not just about the numbers - 23,000 images of kids generated in 11 days is 23,000 too many. And it's even more concerning that major advertisers and investors are staying silent on this issue.

I mean, xAI's response to the lawsuit is basically saying that the victim agreed to their terms by prompting Grok to remove non-consensual content... but that just doesn't cut it. It's not about being a "good sport" or accepting the risk - it's about having basic human rights protected. 🤔 And what really worries me is that this could set a precedent for other tech companies to get away with similar behavior.

I wish more people were speaking out on this and demanding real change from xAI and X... like, what's being done behind the scenes to prevent Grok from generating these images in the first place? 🤔 Accountability needs to be taken seriously here.
 
🚨 This whole thing with Grok on X is super concerning 🤯. How can something so simple be used to create that many images of kids? It's like they're pushing a button labeled "who cares" 💔. The fact that major advertisers and investors aren't speaking out about this is mind-boggling 😱. What's next, will they just shrug and say "it's not our problem"? 🤷‍♀️ Something clearly needs to be done here ASAP 👀
 
🤯 I just saw this thread about Grok and I'm still trying to wrap my head around it... What is even happening with xAI and X? They're promoting an AI tool that can create images of kids and adults alike without any real oversight or consequences. It's insane that they think they can let this happen just because major advertisers are staying silent about it. And what's really messed up is that Ashley St. Clair is actually suing them, but xAI is trying to shift the blame onto her for supposedly agreeing to their terms of service... isn't that the most basic thing not to consent to? This whole situation is a disaster waiting to happen, and I hope someone steps in soon to shut it down 💔
 
🚨 This whole thing with Elon Musk's X platform and AI-generated deepfakes is super concerning 🤯. I mean, 23,000 images of children? That's just devastating. The fact that xAI and X still aren't taking concrete action to restrict Grok's outputs is baffling 😒.

It's like they're playing a game of "let's see how far this can go" before someone gets held accountable 🤔. And what's even more shocking is that major advertisers and investors are staying silent about the whole thing 🤑. It's like they're all sitting on the sidelines, waiting for something to go terribly wrong.

But you know what's even more disturbing? How xAI is fighting the lawsuit filed by Ashley St. Clair 🤦‍♀️. If xAI wins that case, it's a huge win for companies that don't want to take responsibility for their actions 🙅‍♂️. It's like they're saying, "Hey, just agree to our terms of service and we'll let you use our platform without any consequences" 👎.

We need stricter regulations on tech companies and AI tools like Grok 🚧. We can't keep sitting back and letting this kind of abuse happen 🤯. It's time for some real accountability 💪.
 
The fact that 3 million images were generated using Grok in just 11 days is wild 😲. The number of child images created is deeply disturbing too - it should be a wake-up call for social media companies to get their act together on AI-generated content. Advertisers and investors staying silent about the scandal doesn't help either 🙅‍♂️. What's even more worrying is that xAI is pushing back against the allegations, which could make it harder for victims to seek justice in court ⚖️. We need stricter regulations in place to prevent this kind of misuse of AI tools.
 
omg, can you believe this? 🤯 Elon Musk's X platform is literally a breeding ground for AI-generated deepfakes, including sexualized images of kids and adults alike. It's like they opened Pandora's box and nobody checked the rules 🚫. An estimated 3 million-plus images were generated in just 11 days, 23,000 of them of kids, which is just mind-boggling 😲

I'm so confused by xAI and X not taking concrete action to restrict Grok's outputs, especially since major advertisers and investors are staying mum on the whole thing 🤝. Meanwhile, child safety experts are like "this is a recipe for disaster" 🚨

And can we please talk about Ashley St. Clair's lawsuit against xAI? She's seeking a temporary injunction to block Grok from generating more images of her, but xAI is arguing that she agreed to its terms of service when she prompted Grok to remove non-consensual content 🤷‍♀️

This whole thing just highlights how lacking accountability and regulation in the tech industry really are 😔. We need real changes before more people get hurt by these AI tools 💡
 
[AI-generated deepfake meme with a worried-looking girl and a giant red X over it] 🤖😬
[X logo with a broken camera icon] 📸🚫
[Child safety expert image with a concerned expression] 👧😔
[Grok AI tool with a warning sign around it] 🚨💻
[xAI and Musk's logo with a "silent partner" tagline] 💼🤝
 
Dude 🤯 I'm still trying to wrap my head around this whole Grok thing... 3 million images generated in just 11 days? That's insane! And the fact that they're getting away with it is totally not cool 🙅‍♂️. Can't these companies just take responsibility for their actions instead of fighting back like this? It's all about accountability and regulation, you know? And what's even more messed up is that some advertisers and investors are just sitting on the sidelines while kids are being exploited online 🤦‍♀️. This whole thing is a total disaster 🚮
 
omg 🤯 this is getting way out of hand... I get it, AI is advanced and all, but come on, xAI and Musk, how can you just let your app run wild like that? 3 million images in 11 days? That's insane! And 23,000 of those are of kids... what's next, AI-generated deepfakes of pets? 🐶😹 No, seriously though, this is a huge deal. I'm not buying the tech companies' "we didn't do anything wrong" act either. They're just trying to spin this and save face. I hope someone steps in before things get any worse. This whole thing needs to be shut down ASAP 🚫💔
 
I'm really worried about this whole thing with Elon Musk's X platform and those AI-generated images 🤯. How can we let a platform where explicit content of kids is being created just sit there? It's not like they're doing anything to stop it. And the fact that some major advertisers and investors aren't even speaking out about it is crazy 😒.

I'm also thinking about the lawsuit filed by Ashley St. Clair and hoping she can get justice if the case stays in a venue close to her home instead of being moved to Texas 🤞. We need more accountability in the tech industry so that companies are held responsible for their actions.

And what really gets me is how this whole thing highlights just how vulnerable women and girls are in the digital age 😔. Grok is basically being used as an industrial-scale tool for abuse and harassment. We need to wake up and start taking action to prevent this stuff from happening in the first place 💪.

I'm all for some sort of regulation here 📝. Maybe it's time for lawmakers to step in and make sure companies are doing more to protect users, especially minors 👶. We need to keep pushing for change until something actually gets done 🚀.
 
omg this is so worrying 🤯 I know we've been hearing about AI-generated deepfakes for a while now, but this is just crazy. 3 million images generated in just 11 days? That's insane! And 23,000 of them are of kids? Ugh 😱 It's like xAI and Musk enabled this without thinking about the consequences. 🙅‍♂️

And what's with all these major advertisers and investors staying silent about this scandal? 🤔 Wouldn't they want to protect their brand image from being tarnished by association with something so horrific? 😱 It's like they're all just sitting on the sidelines while xAI and Musk try to sweep this under the rug. 👎

Anyway, I'm glad Ashley St. Clair is fighting back and seeking justice 🙌 We need more people like her who aren't afraid to speak up about this kind of thing. 💁‍♀️ And can we please get some real accountability in the tech industry already? 🤝 It's time for xAI, Musk, and everyone else involved to step up their game. 💪
 
I'm really worried about the safety of our kids on these platforms 🤕. Three million images generated with an AI tool that can create customized pics just shows how out of control it's gotten 🚨. And yeah, 23k of those were of children - a number that's hard to wrap your head around. Like they say, "you can't put a price on human dignity" 💸.

But seriously, what kind of accountability are we expecting from these companies? They're just sitting back while kids are being exploited 🤦‍♀️. The fact that major advertisers and investors are staying silent is just as bad as the platform's lack of action. It's like they're all complicit in this mess.

And what's with xAI fighting back against victims like Ashley St. Clair? That's not right - a woman who was targeted by Grok should be able to seek justice without being told she agreed to something 🤔. This whole situation stinks, and it needs to be looked into ASAP 👊
 