Elon Musk’s Grok ‘Undressing’ Problem Isn’t Fixed

Elon Musk's attempts to tackle the 'undressing' problem have so far proved inadequate.

The tech mogul has repeatedly vowed to crack down on the use of his AI chatbot, Grok, to generate explicit, non-consensual images. However, recent tests by researchers, journalists, and users show that meaningful safety measures have yet to be fully implemented.

In an effort to curb the proliferation of "undressing" content, X has introduced new restrictions limiting users' ability to edit and generate images of real people in revealing clothing, such as bikinis. The updates were rolled out hastily after global criticism of Grok's role in generating thousands of non-consensual intimate and sexualized images.

Despite these changes, experts point out that the AI tool continues to pose a risk, particularly when accessed through unverified accounts or outside X's platform. Researchers have demonstrated that the standalone Grok apps remain capable of producing explicit images, including photorealistic nudity.

Paul Bouchaud, a leading researcher in this area, has reported significant inconsistencies between the restrictions imposed within X and those applied to the standalone Grok platforms. "We can still generate photorealistic nudity on Grok.com," he noted, highlighting the persistence of AI-generated explicit content.

Moreover, reports from various users suggest that some image generation capabilities remain accessible within X's platform, particularly where geolocation-based blocking can be circumvented. The AI Forensics team has documented more than 90,000 images generated with Grok since the Christmas holidays, underscoring the ongoing challenge for platforms attempting to regulate AI-generated explicit content.

In response to user concerns, Elon Musk recently posted on X asking whether anyone could break Grok's image moderation systems. Multiple users reported varied results when trying to generate explicit images or videos with Grok, ranging from successfully creating nudity to hitting strict moderation and limits on image generation.

Ultimately, while X has taken steps to address the "undressing" problem, significant challenges remain in implementing measures that effectively protect users and prevent non-consensual AI-generated content.
 
idk about X's new restrictions 🤔... they don't seem to be doing enough to prevent Grok from spitting out explicit pics and vids 📸👙.. I'm sure there are still plenty of ways to circumvent the system, especially if you're using a verified account 😒. Elon Musk's "can you break our systems" post seemed like a pretty lol attempt to gather feedback from users... but at the end of the day, it's all about the algorithm and how well X can monitor it 🤖💻. Still, I think it's good that they're trying to tackle this issue and make changes 👍
 
I'm not surprised, tbh 🤔. Elon's all about pushing boundaries & innovating, even if it means we're still figuring out how to regulate AI-generated content 🚀. I mean, those 90k+ images are a big number 💥! It's like, we need to get smarter at detecting and preventing these non-consensual pics, rather than just relying on tech fixes 🤖. And honestly, it's kinda cool that Musk is acknowledging the issue & asking users if they can crack the system 🔒... I mean, who wouldn't wanna try? 😂 Anyway, let's keep pushing forward & finding creative solutions to these problems 💡. We got this! 👍
 
Wow 🤯 it's so weird how some people can just use AI to generate explicit images without anyone's consent 😳. It's like they're trying to outsmart the system or something 🤔. Also, I think Elon Musk needs to do more to protect his users, like making it harder to access the platform without a verified account 💯
 
I don’t usually comment but... I feel like Elon Musk is trying to tackle this issue, and yeah, he's taking some steps in the right direction by restricting image generation, but it's just not enough 🤔. I mean, think about it, if Grok can still produce explicit images even outside of X's platform, that's a huge problem. It's like they're trying to hold water in their hands – it's gonna leak out eventually 💧. And what really gets me is how some users are still able to create explicit content within the platform, even with geolocation blocking 🤦‍♂️. I don't know, man, it feels like we're just pushing this issue around until someone finds a way to actually fix it 🔩.
 
I'm kinda disappointed but not surprised that Elon's attempts to curb explicit images on Grok still haven't quite hit the mark 🤔. It's like trying to put a lid on a fire with a flimsy plastic sheet - it might look good on paper, but in practice, it just doesn't cut it 😒. I mean, 90k+ images generated using Grok since Christmas? That's a big issue! 📊

It's not like X hasn't tried to restrict image editing and explicit content generation, but the fact that standalone Grok apps can still produce the same stuff is just a major red flag 🔥. And what's with the inconsistent moderation systems? It's all about finding that middle ground between free speech and protecting users' rights, you know? 🤝

Elon's attempt to ask users if they can break his image moderation system is a bit cheeky, but it does highlight just how much work there still is to be done 😅. I think what's needed is a more nuanced approach that takes into account all the various ways AI-generated content can be used (or abused 🤷‍♂️). We need to find a way to make safety measures effective without stifling creativity or freedom of expression... easier said than done, I know 🙃.
 
I'm really disappointed with how easy it is for Grok to keep generating explicit images 🤦‍♂️. I mean, Elon Musk said he'd crack down on this stuff but it's like they're just poking holes in their own safety net. These new restrictions are a good start, but if users can still find ways to get around them, what's the point? 😒

I think what really bothers me is that there are people out there who aren't even aware of how vulnerable they are to AI-generated explicit content. It's like, you're scrolling through your socials and suddenly you're hit with a bunch of pics that make you feel uncomfortable or creeped out. And by the time you figure out what's going on, the images have already been shared everywhere 🤯.

It's not just about X's platform either – it's a bigger issue when you think about how AI-generated content can spread and cause real harm to people's lives. We need more research, more awareness, and more effective solutions that actually work 💡.
 
Come on 🤷‍♂️, Elon Musk is doing a great job here! I mean, who needs actual safety measures when you can just limit the editing features on your platform? That's like trying to stop a tsunami with a broken umbrella 😂. And honestly, if users are still managing to create explicit images despite these "restrictions", it just shows how good Grok is at its job! 🤖 I'm not buying all this fuss about "undressing" content being a big problem... it's just a bunch of over-sensitivity and bad PR for the tech giants 💁‍♀️.
 
🤔 The thing is, I don't think Elon Musk's team really gets it yet... they're trying so hard to control Grok but there are still ways around it 🚫. Like, if you create a standalone app or use unverified accounts, the AI is still gonna make explicit images 😷. And even with all these new restrictions on X, researchers can still find loopholes and generate explicit content for free 🤦‍♂️. It's like, they need to think outside the box (or in this case, the platform) and come up with some real solutions that prevent non-consensual AI-generated images from spreading 💻.
 
🤔 I mean, come on... Elon Musk's team thinks a simple tweak is gonna fix the whole issue? Like, they can't even get it right with their own platform? 🚫 It's still super easy for people to generate explicit images using Grok, and that's not just because of standalone apps. The fact that multiple users are getting different experiences just goes to show how flawed this whole system is.

And let's be real, what's the point of restricting certain types of edits if you're not gonna do a full overhaul of the platform? It's like putting a Band-Aid on a bullet wound 😒. We need real solutions here, not half-measures that are just gonna let the problem creep back in.

I'm all for innovation and progress, but we can't keep relying on tech companies to police their own platforms 🤦‍♂️. Users need more control over what they're creating and sharing online. That's how we're gonna get rid of this whole "undressing" problem once and for all 💪
 
I'm still salty about Grok 🤦‍♂️. Like, what's up with Elon thinking these updates would be enough? It sounds like he's just ticking boxes to placate the masses without actually addressing the root issue. I mean, if researchers can still create explicit pics on standalone Grok apps, that's a major flaw. And don't even get me started on geolocation blocking – it's like they're not even trying 😒. X needs to step up their game and prioritize user safety over convenience. Until then, I'll be keeping an eye on (and my distance from) this whole thing 👀.
 
🤔 I'm not surprised Elon Musk's efforts to curb the use of Grok for generating explicit images aren't working out as planned 🙄. It's like trying to put a lid on a leaky bucket, no matter how hard you try, the issue just keeps seeping through 💧. The fact that researchers can still generate photorealistic nudity using standalone Grok apps is a major concern 🚨. And what really gets my goat is when users report having mixed results trying to use Grok within X's platform 🤯. It's like they're not even trying hard enough 😒. As a parent, I just want to know that the platforms I trust are doing everything in their power to keep my kids safe online 💕. But if AI-generated explicit content keeps slipping through the cracks, then we have a bigger problem on our hands 🤦‍♀️.
 
I'm so frustrated with Elon Musk's approach to tackling this issue 🤯. I mean, introducing new restrictions is a good start, but it's clear that more needs to be done to address the root of the problem. The fact that standalone Grok apps are still able to generate explicit images is a huge concern and highlights how X needs to do more to regulate the platform. I think we need to have an open conversation about what it means for tech companies like X to take responsibility for the content created on their platforms. It's not just about implementing rules, but also about providing resources and support for users who are being affected by this issue 🤝. We can't keep putting Band-Aid solutions in place when we need more comprehensive fixes.
 
I'm still low-key shocked Elon's team is being outsmarted by their own AI 🤯. Like, I get it, they're trying to keep up with the pace of these new tech advancements, but come on... 90k images just since the Christmas break? It's like they forgot about this whole 'safety' thing 😅. And honestly, who thought it was a good idea for him to ask users if they could break their moderation systems? That's just an open invite for all the troublemakers 🤪. I mean, X is trying, but it feels like they're playing catch-up with this whole AI-generated explicit content thing. Can't say I'm holding my breath for meaningful change anytime soon 😐
 
come on Elon Musk, what's the deal with Grok? you're trying to fix this but it still seems like a sieve tbh. I mean, I get it, he tried to make some changes, but it's not like you can just magically fix a multi-billion-dollar problem with one update lol. also, what is geolocation blocking even supposed to do here? it's not like you're going to be able to track people down to real-life locations from an image generated on the platform 🤔
 