South Korea Unveils Comprehensive AI Laws Amid Rising Global Concerns, But Critics Say They Fall Short
South Korea has taken a bold step in regulating artificial intelligence (AI), enacting what is billed as the most comprehensive set of AI laws anywhere in the world. The move aims to position the country as one of the leading AI powers globally, but it is facing significant pushback from both tech startups and civil society groups.
The new legislation, dubbed the "AI Basic Act," requires companies providing AI services to label AI-generated content: invisible digital watermarks for clearly artificial outputs such as cartoons or artwork, and visible labels for realistic deepfakes. The law also mandates risk assessments and documentation for high-impact AI systems used in medical diagnosis, hiring, and loan approvals.
However, the threshold for regulating extremely powerful AI models is set so high that government officials acknowledge no models worldwide currently meet it. Companies that violate the rules face fines of up to 30 million won (£15,000), but a grace period of at least a year will apply before penalties are imposed.
Critics on both flanks say the law misses the mark. Civil society groups argue it does not go far enough in protecting citizens from AI risks, while startups say the compliance burden is unworkable: a survey found that 98% of AI startups were unprepared to comply, and experts warn of a competitive imbalance, since all Korean companies face regulation regardless of size while only foreign firms meeting certain thresholds must comply.
The push for regulation comes against a uniquely charged domestic backdrop: South Korea accounts for 53% of all global deepfake pornography victims, according to a recent report. The law's origins predate this crisis, but protective provisions repeatedly stalled as industry interests were prioritised over citizen protection.
Civil society groups maintain that the new legislation offers little protection for people harmed by AI systems. Four organisations issued a joint statement arguing that the law contains almost no provisions to shield citizens from AI risks, noting that while it stipulates protection for "users," those users are the hospitals, financial companies, and public institutions that deploy AI systems, not the people affected by them.
The country's human rights commission has criticised the enforcement decree for lacking clear definitions of high-impact AI, noting that those most likely to suffer rights violations remain in regulatory blind spots. Experts say South Korea chose a different path from other jurisdictions, opting for a more flexible, principles-based framework that prioritises "trust-based promotion and regulation."
The effectiveness of this approach remains to be seen, but the global community is watching closely as countries grapple with rapidly advancing technologies.