South Korea's 'world-first' AI laws face pushback amid bid to become leading tech power

South Korea's bold foray into regulating artificial intelligence has sparked heated debate, with some hailing it as a model for the world and others criticizing its limitations. The country's newly enacted AI Basic Act, which took effect last week, aims to promote industry growth while minimizing regulatory hurdles. However, local tech startups worry the law may prove too restrictive, while civil society groups argue it is not stringent enough.

Under the new legislation, companies providing AI services must label outputs that are clearly artificial, such as cartoons or artwork, while realistic deepfakes require prominent, visible labels. High-impact AI systems used for medical diagnosis, hiring, and loan approvals must also undergo risk assessments and document their decision-making processes. The most powerful AI models are required to submit safety reports, although that threshold is set so high that no current model worldwide meets it.
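As a rough illustration of the two-tier labelling rule described above, here is a minimal sketch of how a provider might record it. The act prescribes no format, so every name below (`label_output`, `content_id`, `ai_generated`, `visible_notice`) is a hypothetical assumption for illustration, not anything taken from the law:

```python
import json

def label_output(content_id: str, realistic: bool) -> str:
    """Return a JSON label record for one piece of AI-generated media.

    Hypothetical sketch: clearly artificial outputs (cartoons, artwork)
    get a machine-readable marker only; realistic deepfakes additionally
    carry a visible on-screen notice.
    """
    record = {
        "content_id": content_id,  # hypothetical identifier for the output
        "ai_generated": True,      # marker applied to every AI output
        # visible notice only when the media could be mistaken for real footage
        "visible_notice": "AI-generated content" if realistic else None,
    }
    return json.dumps(record)

# A realistic deepfake gets the visible notice; a cartoon does not.
print(label_output("clip-001", realistic=True))
print(label_output("art-002", realistic=False))
```

This is only a sketch of the rule's shape, under the assumption that "clearly artificial" versus "realistic" is something the provider has already determined.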

The government has promised a one-year grace period before penalties take effect, with fines of up to $15,000 for non-compliance. Critics argue, however, that requiring companies to determine for themselves whether their systems qualify as high-impact AI creates uncertainty, and that applying the rules to all Korean companies regardless of size puts smaller firms at a competitive disadvantage.

Civil society groups also warn that the law provides insufficient protection for people harmed by AI systems, particularly given South Korea's alarming rates of deepfake pornography and exploitation: according to a recent report, the country accounts for 53% of global deepfake victims.

Some experts see the new legislation as a positive step towards balancing industry growth with safety and accountability, while others argue that it falls short in several areas. Unlike the EU, which has adopted a stricter risk-based regulatory model, South Korea has opted for a more flexible framework centered on "trust-based promotion and regulation".

The law's origins date back to 2020, but previous versions stalled due to provisions prioritizing industry interests over citizen protection. The current version has been criticized by civil society groups for lacking clear definitions of high-impact AI and providing limited protection for users.

As South Korea strives to become one of the world's leading AI powers alongside the US and China, it remains to be seen whether its new legislation will prove effective in balancing industry growth with public safety and accountability.
 
idk about this new ai regulation in south korea 🤔... on one hand, i get that they wanna promote industry growth but at what cost? 🤑 it feels like the gov is still playing catch up on ai safety 🚨. the fact that deepfakes are super prevalent in south korea and now they're gonna label them with a warning sign? 🚫 not gonna stop people from exploiting others, you know? 💔 also, what's with the "trust-based promotion" framework? does that even make sense? 🤷‍♀️ seems like it's just gonna let big corps off the hook while small startups get left behind... 📉 gotta keep an eye on this one, tbh 👀
 
idk why they're so worried about ai labels 🤔. i mean, deepfakes are super scary but like, labeling cartoons is just basic lol 🎨. and fines of $15k seem pretty reasonable for a country that's still catching up on tech 🤑. i think the gov is trying to find a balance between innovation and safety, and it's not all or nothing 🤝. maybe they should've asked the public more directly about what they want, but at least they're trying 🗣️.
 
I think this is a pretty bold move by South Korea 🤔. On one hand, I get why they want to regulate AI more strictly - deepfakes are getting out of control and it's affecting people's lives 🚨. But on the other hand, I'm not sure if their approach will work...I mean, how can you define what makes an AI high-impact? It feels like they're trying to balance industry growth with public safety, but I think we need more clarity on this 🤷‍♀️.

It's also weird that they're giving companies a year to comply before penalties kick in...that just seems like a way for them to avoid making any real changes 💸. And what about the threshold for reporting safety concerns? It's so high that it's basically saying "good luck" 🤪. I just hope that South Korea can figure out this AI thing before it gets completely out of control 😬.

I also wonder if other countries are watching and thinking, "wait a minute, how did they do that?" 🎥. They could definitely learn from each other's experiences...maybe the EU or US have better systems in place? 🤔
 
this is like, a perfect example of the balance we need in everything 🤯... on one hand, we gotta protect ourselves from harm, but on the other hand, we can't suffocate innovation just because it's new & exciting 💡... south korea's trying to find that middle ground with their ai regulation law, but it's not perfect, and that's what makes it relatable 🤔... if you think about it, our personal safety is like the "high-impact" part - we need protection from AI-related risks 🚨, but at the same time, too much restriction can stifle growth & progress 🌱... maybe they're learning from their mistakes & trying to do better? 🤞 only time will tell if this law really works in the long run 💯
 
I feel bad for South Korea's approach to regulating AI 🤔. I think they're trying to be pioneers and set a good example for the rest of the world, but maybe they went too far? The 1-year grace period is a nice touch, though 👍. I'm not sure if their "trust-based promotion and regulation" framework will work as well as other countries' risk-based models. The problem with lack of clear definitions of high-impact AI is that it's hard to regulate something you can't even define properly 🤯. And, y'know, some people might say they're being a bit too lenient on companies 🤑. Still, I think it's great that they're trying to address the whole deepfake issue – 53% of global victims? That's crazy! 🚨. Maybe we'll see how this all plays out and if South Korea can find that perfect balance between growth and safety 😊.
 
I'm still trying to wrap my head around this AI basic act 🤔... so basically, South Korea is requiring companies to label cartoonish stuff as 'artificial' and deepfakes as 'deepfake' 📺, but what about the actual impact on society? Like, what if someone's life is ruined by a deepfake? The law seems to be more focused on protecting industry interests than actual people 🤑. And don't even get me started on this "trust-based promotion and regulation" thing... sounds like just a fancy way of saying 'we're not sure how to regulate this, so we'll just trust the companies' 💻. I need some real sources behind these claims before I can even consider this a model for the world 📊
 
The AI law thing is kinda interesting 🤔. I mean, on one hand, you got these tech startups freaking out because they don't wanna label their cartoons 😂, but on the other hand, you got people getting hurt by deepfakes and stuff 💔. And it's weird that Korea's law is like, super relaxed compared to other countries 🤷‍♂️.

I think what's crazy is how Korea's got a 53% share of global deepfake victims, that's wild 😲. And the law's all about promoting industry growth, but at what cost? 🤑 Are we really putting people over profits here? 💸

It's like, I get where they're coming from with wanting to be an AI leader and all, but it feels like Korea's gonna end up playing catch-up instead of being a trailblazer 👀. We'll see how this whole thing plays out, maybe it'll work, or maybe it'll be more like that one time when the law was introduced in 2020 🤦‍♂️ and nobody knew what to do 😅
 
I think this law is super interesting 🤔, but also kinda worrying 🚨. On one hand, it's awesome that South Korea is taking AI regulation seriously 🙌, especially since deepfakes have become a huge issue there. The labels on cartoons and artwork are gonna help consumers make informed decisions 👀.

But, on the other hand, I'm not sure if this law is going to have a real impact 💯. The threshold for high-impact AI is pretty vague 🤔, so how many companies actually qualify? And what about those who don't fit into that category but still need to be regulated? It's like they're getting a free pass 😬.

I also feel bad for the victims of deepfake exploitation 🤕. The law needs better protection for them 👮‍♀️. It's not just about the tech companies; it's about the people who get hurt by these systems 💔.

One thing I'd love to see is more transparency and clearer definitions 🔍. This "trust-based promotion" framework doesn't quite sit right with me 🤷‍♀️. How are we supposed to trust AI when we don't know what it's capable of? 🤯

Anyway, South Korea's definitely taking a bold step 💪, but only time will tell if it's the right one 🕰️.
 
AI law is a joke in Korea 🤣... "trust-based promotion and regulation" sounds like a bunch of corporate speak for "we're not really doing anything about the risks". I mean, who doesn't want to be protected from deepfakes and AI-powered exploitation? But until they can even define what high-impact AI is, it's all just hot air. And $15k fines are basically peanuts compared to the potential damage of a faulty AI system. The EU and US are killing it with their risk-based regulatory models, Korea should take notes 📝💻
 
I'm low-key worried about this new AI law 🤔. I mean, on one hand, it's cool that South Korea is taking steps towards regulating AI, but at the same time, I feel like they're being a bit too relaxed 🤷‍♂️. Like, what's with the whole trust-based promotion and regulation thing? Doesn't that just open up more loopholes for companies to exploit? And yeah, the fine of $15,000 doesn't seem like enough to deter major players 🤑. But at the same time, I get where they're coming from - we don't want AI systems messing with our lives and causing harm 🚨. So maybe this law is a step in the right direction, but it feels like it needs some more tweaks to make it super solid 💪.
 
I'M NOT IMPRESSED WITH SOUTH KOREA'S NEW AI REGULATIONS YET!!! THEY'RE BIASED TOWARDS BIG TECH COMPANIES OVER SMALL STARTUPS! IT JUST WON'T CUT IT IF WE WANT TO PROTECT PEOPLE FROM DEEPFAKE PORNOGRAPHY AND OTHER BAD AI STUFF. THE CURRENT FRAMEWORK IS TOO FLACCID, ESPECIALLY WHEN IT COMES TO SAFETY REPORTS - WHO'S GOING TO DO THOSE ANYWAY?!?
 
🤔 I'm so done with these new regulations on AI services in S Korea. The labeling thing is gonna be a nightmare for businesses and consumers alike. Like, do we really need labels just to know if our cartoons are AI-generated or not? 🎨 It's just too much extra work for the average Joe. And what about deepfakes? Shouldn't we have more robust safeguards in place considering how prevalent they are and the harm they can cause? The whole system feels a bit toothless, imo. 🤷‍♂️
 
I'm not sure about this new law 🤔. On one hand, I get the need for regulation, especially when it comes to deepfakes which are getting out of control 🚨. It's crazy that South Korea has 53% of global victims 😱. But at the same time, I think the threshold for high-impact AI is kinda vague 🔥. I mean, some companies might be able to game the system and get away with stuff without proper risk assessments.

And what about the fines? $15,000 isn't exactly a scare tactic 🤑. I worry that this law will create more uncertainty than clarity, especially for small startups who don't have the resources to deal with all these new regulations. And what about the users? Will they really get the protection they need from AI exploitation?

It's interesting to see how other countries like the EU and US are taking a risk-based approach 📈. Maybe South Korea can learn from them and create an even better framework that balances industry growth with public safety 😊.
 
AI thingy... you know what's wild? Virtual reality headsets are getting so good now that I can kinda feel like I'm inside a cartoon 🤖🎨. Like, have you guys tried those new Samsung ones? They're insane! The graphics are so realistic it feels like I'm right there in the game. But sometimes I wonder... do we really want to be so immersed in virtual worlds that we forget what's real? 🤔
 
I think the new law is a bit overhyped 🤔. I mean, how realistic is it that Korean companies are gonna follow these rules without any major pushback? It's just too easy to game the system and find loopholes. And when you have no standards in place for high-impact AI, it's basically just a crapshoot 😒. Plus, the fine of $15k is pretty paltry compared to how much some of these companies are making off this AI tech. And don't even get me started on the whole "trust-based promotion and regulation" thing - that sounds like just corporate-speak for "we're gonna let you play fast and loose as long as we're happy with it 😏".
 
I'm low-key worried about this whole thing lol 🤔. I mean, I get where they're trying to promote industry growth but 15k fines for non-compliance? that's a lot 💸. And what's up with the threshold for high-impact AI? like, it's already hard enough for smaller companies to compete with the giants in this space 🤖. And don't even get me started on deepfakes - our country is basically a hotbed for those things and I'm not sure if labeling them is gonna be enough to stop the exploitation 🚫.

I'm all for trust-based promotion but where's the clear definition of high-impact AI? it feels like they're winging it here 🤷‍♂️. And what about the users? civil society groups are right, we need more protection for people who get hurt by these systems 💔. I guess only time will tell if this law actually works as intended 🔮.
 
I'm not sure about this new AI law in South Korea... seems like they're trying to balance growth with safety but I think it might be a bit too relaxed for my taste 😐. The whole labeling thing is a good start, but what if some companies just find ways around it? And the lack of clear definitions on high-impact AI has me worried - how are we supposed to know when an AI system is being used in a way that's really harming people? 🤔. I'm also curious to see how this plays out with the deepfake stuff... it feels like they're not taking enough action to protect users, especially since South Korea is one of the worst countries for deepfakes 🚨.
 
I think it's a good start, but like they say, you can't build a house on shaky ground 🤔. I mean, the whole "trust-based promotion and regulation" thing sounds nice in theory, but when it comes down to it, who's going to actually hold those companies accountable for their actions? The one-year grace period doesn't feel like enough time to work out all the kinks, and $15k isn't exactly a huge fine 🤑. I'm also not sure how effective those labels on AI outputs are going to be - I mean, can you really just slap a sticker on something and expect people to know what's going on? 😒
 
I'm so hyped about this news lol! I think the gov is taking a huge step forward by introducing regulations on AI. Like, we can't just let companies run wild with these powerful techs without some kind of oversight 🤯. The whole labeling thing is a good start - it's like, if you're gonna use AI to create cartoons or whatever, you gotta tell people it's not real news 😂.

But at the same time, I'm all about that transparency 💁‍♀️. These safety reports and risk assessments need to be super detailed so we can trust that companies are being honest about their techs. And what's up with the $15k fine? Like, that's nothing compared to what people could lose if they get sued for deepfake-related issues 🤑.

I'm also low-key worried about the impact on smaller startups 🤔. If all Korean companies have to follow the same rules regardless of size, it might stifle innovation and growth. But I guess we just gotta see how this whole thing plays out 💪. One year is a pretty chill grace period - hoping they get it right next time 🤞.
 