South Korea's 'world-first' AI laws face pushback amid bid to become leading tech power

South Korea Unveils Comprehensive AI Laws Amid Rising Global Concerns, But Critics Say They Fall Short.

The country has taken a bold step toward regulating artificial intelligence (AI), launching what is billed as the most comprehensive set of AI laws anywhere in the world. The move aims to position South Korea as one of the world's leading AI powers, but it is facing significant pushback from both tech startups and civil society groups.

The new legislation, dubbed the "AI Basic Act," requires companies providing AI services to label AI-generated content: clearly artificial outputs, such as cartoons or artwork, must carry invisible digital watermarks, while realistic deepfakes must be visibly labelled. The law also mandates risk assessments and documentation for high-impact AI systems used in medical diagnosis, hiring, and loan approvals.

However, the threshold for extremely powerful AI models is set so high that government officials acknowledge no models worldwide currently meet it. Companies that violate the rules face fines of up to 30 million won (£15,000), and a grace period of at least a year applies before penalties are imposed.

Critics argue that the law does not go far enough in protecting citizens from AI risks. A survey found that 98% of AI startups were unprepared for compliance, and experts warn of a competitive imbalance: all Korean companies face regulation regardless of size, while foreign firms must comply only if they meet certain thresholds.

The push for regulation comes against a uniquely charged domestic backdrop. South Korea accounts for 53% of all global deepfake pornography victims, according to a recent report. The law's origins predate this crisis, but protective provisions were repeatedly stalled as industry interests were prioritised over citizen protection.

Civil society groups maintain that the new legislation provides limited protection for people harmed by AI systems. Four organisations issued a joint statement arguing that the law contains almost no provisions to protect citizens from AI risks. The groups noted that while the law stipulated protection for "users," those users were hospitals, financial companies, and public institutions that use AI systems, not people affected by AI.

The country's human rights commission has criticised the enforcement decree for lacking clear definitions of high-impact AI, noting that those most likely to suffer rights violations remain in regulatory blind spots. Experts say South Korea chose a different path from other jurisdictions, opting for a more flexible, principles-based framework that prioritises "trust-based promotion and regulation."

The effectiveness of this approach remains to be seen, but the global community is watching closely as countries grapple with rapidly advancing technologies.
 
 
I'm not sure if South Korea just decided to regulate AI out of guilt or because they actually want to prevent deepfakes from being used to ruin people's lives 🤦‍♂️. Like, 53% of global deepfake victims are in one country? That's wild. But I guess when you're dealing with something like that, you've got to do something. The new law is a good start, but it feels like a Band-Aid solution at best.

I'm also kinda curious about this "trust-based promotion and regulation" framework the government is using 🤔. Sounds like they're trying to strike a balance between giving companies some leeway and protecting citizens. But what happens when those two worlds collide? And how do we even define what's considered "high-impact AI"? It sounds like a minefield just waiting to happen 💥.

Anyway, kudos to South Korea for taking the lead on this issue. I hope it sets a good precedent for other countries 🤞. But hey, only time will tell if this law actually makes a difference or just becomes another bureaucratic headache 🙄.
 
🤔 I think South Korea's move on AI laws is kinda interesting. They're trying to get ahead of the game, which is smart. But, I gotta agree with critics that the law might not be strict enough. The whole invisible digital watermark thing just seems like a fancy way of saying 'we can't even start to measure this'. It's almost like they're putting all their trust in tech companies being responsible? 🤷‍♂️

I mean, have you seen those deepfake videos that are flooding the internet? It's crazy how easily someone can create realistic AI-generated content. South Korea's got a problem with it already, and now they're just introducing more laws like this? It feels like they're playing catch-up. 🚨
 
🤔 I'm not sure if South Korea's new AI laws are too little, too late. Like, seriously, 98% of AI startups are already struggling to keep up? 😱 It feels like they're playing catch-up rather than getting ahead of the game. And what about those foreign firms that don't have to comply? Are they just gonna swoop in and dominate the market while Korean companies are stuck following the rules?

The whole thing just seems so... reactive. 🤖 The law's focus on "trust-based promotion and regulation" sounds like a nice phrase, but is it really doing anything concrete to protect citizens? I mean, if 53% of all global deepfake pornography victims are South Korean, shouldn't that be a major priority? 🚨

We need more than just vague promises and fancy frameworks. We need real enforcement and accountability. 💯 Until then, I'm not convinced this law is gonna make a significant difference in the long run. 😐
 
I think it's kinda weird how some companies are saying they don't want stricter laws on AI 🤔👀 But at the same time, their profits depend on it 💸 I mean, 98% of AI startups were unprepared for compliance and that's a big problem 🚨💻
 
Wow 🤯! I'm interested in how South Korea is taking AI regulation super seriously and trying to set a new standard. But at the same time, I'm like, why not cover more citizens? The whole deepfake thing is wild and I feel bad for those who got affected. The law seems pretty good on paper but what's gonna happen when it comes to actual enforcement? 🤔
 
🤔 I'm kinda thinking they need 2 make it more concrete, u know? This AI Basic Act seems like a good start but critics are saying it's not doing enough 2 protect people from all these risks. Like, what about the people who get affected by deepfakes? They should have more protection, no matter if they're citizens or not.

And I'm not sure how fair it is that Korean companies have to follow these rules regardless of size, while foreign firms only do if they hit certain thresholds. That doesn't seem right, tbh. And what's with the threshold for super powerful AI models? It seems like they're kinda making it up as they go along.

I also think the law could benefit from clearer definitions of high-impact AI and more concrete penalties for non-compliance. This whole "trust-based promotion and regulation" thing sounds nice on paper but it's not always clear how that'll play out in practice.

Anyway, I guess we'll just have to wait and see how this all plays out 🤷‍♂️. What do u guys think? Should South Korea be doing more 2 regulate AI or is this a good start?
 
Wow! 🤯 I'm kinda surprised that they're trying so hard to regulate AI without considering if it's actually effective 🤔. Like, setting a super high threshold for those crazy powerful models and only making fines apply after like, a year... that seems pretty lenient 😊. And what's up with not providing clear definitions of "high-impact" AI? That's just asking to create more loopholes 🔀. Still, I guess it's better than nothing 🙏.
 
😅 I'm totally stoked about South Korea taking bold steps on AI regulations!!! 🤖 They're basically setting a new standard for the whole world! 💥 I mean, who wouldn't want to protect citizens from deepfake porn and medical AI diagnosis gone wrong? 😱 The invisible watermarks are so cool too! 👀 Can you imagine having to label your AI art as "not real"? 🤣

But, like, seriously though... why did they set the threshold so high for super powerful AI models?! 🤔 It's kinda ridiculous that no one meets it yet. 🙄 And I feel bad for those AI startups that didn't prepare for compliance 😳. They're probably gonna get fined big time!

And omg, 98% of AI startups are unprepared? That's like, crazy! 🤯 How are we supposed to trust these companies with our data and lives if they can't even follow the rules?! 🚫

I'm all for South Korea taking a lead on this... but it'd be awesome if other countries followed suit with similar laws 📈. It's like, AI regulation is the new cybersecurity 🤖💻!
 
ai laws in south korea 🤔

its like theyre trying to catch a wave before its too late, but are they prepared for the ripples afterwards? 🌊

the 30 million won fine seems low compared to what some of these companies are raking in 💸

its not just about the tech startups and big corps tho, what about the average joe who gets duped by deepfakes or whatever else ai is used for? 🤷‍♂️
 
😊 this law feels kinda like they're trying to put a Band-Aid on a bigger problem... i mean, 98% of AI startups are unprepared for compliance? that's a big red flag! and what's up with the threshold being so high that no models worldwide meet it? πŸ€” seems like they should've done more research or consulted with experts outside of Korea.
 
🤔 The AI Basic Act is like setting a fire safety standard in South Korea - everyone has to follow it, but are we really prepared for the actual consequences? 🚨 A 30 million won fine is nothing compared to the potential damage caused by rogue AI systems... I mean, if the threshold for extremely powerful AI models is so high that no one meets it, isn't that just a ticking time bomb waiting to happen? 😬
 
I'm thinking, what's going on here? 🤔 South Korea's got a new AI law, but it's like they're playing catch-up, you know? Like, other countries are already dealing with deepfake porn and AI-powered propaganda, but not us in S.Korea! 🙅‍♂️ And now we're trying to regulate this stuff, but I'm still not sure if our approach is the right one.

I mean, 98% of AI startups are unprepared for compliance? That's like saying, "Hey, just start building your own regulations and hope for the best!" 🙄 And what about foreign firms that don't even meet the threshold? They're basically getting a free pass! 🤷‍♂️

And then there's this whole concept of trust-based promotion and regulation. What does that even mean? Sounds like some fancy policy jargon to me. Can't we just be clear about what we want: protection for citizens from AI risks, or whatever?! It's like they're trying to make it sound all nuanced and complicated on purpose! 🙃

But hey, I guess South Korea's trying to position itself as an AI leader globally. That's cool, but at what cost? Are we just piling more regulation on top of the tech industry without considering how that affects innovation? 🤝
 