South Korea's bold foray into regulating artificial intelligence has sparked heated debate, with some hailing it as a model for the world and others criticizing its limitations. The country's newly enacted AI Basic Act, which took effect last week, aims to promote industry growth while minimizing regulatory hurdles. However, local tech startups worry the law may prove too restrictive, while civil society groups argue it is not stringent enough.
Under the new legislation, companies providing AI services must label AI-generated outputs: clearly artificial content such as cartoons or artwork requires labeling, while realistic deepfakes must carry visible labels. High-impact AI systems used for medical diagnosis, hiring, and loan approvals must also undergo risk assessments and document their decision-making processes. Extremely powerful AI models are required to submit safety reports, although the threshold is set so high that no current model worldwide meets it.
The government has promised a one-year grace period before penalties take effect, with fines of up to $15,000 for non-compliance. However, critics argue that requiring companies to determine for themselves whether their systems qualify as high-impact AI creates uncertainty and a competitive imbalance, since all Korean companies face the regulation regardless of size.
Civil society groups have expressed concern that the law does not provide sufficient protection for people harmed by AI systems, particularly given South Korea's alarming rates of deepfake pornography and exploitation: according to a recent report, the country accounted for 53% of deepfake victims worldwide.
While some experts see the new legislation as a positive step toward promoting industry growth while ensuring safety and accountability, others argue that it falls short in several areas. Unlike jurisdictions such as the EU, which has adopted a stricter risk-based regulatory model, South Korea has opted for a more flexible framework centered on "trust-based promotion and regulation".
The law's origins date back to 2020, but earlier versions stalled amid criticism that their provisions prioritized industry interests over citizen protection. Civil society groups have criticized the current version for lacking a clear definition of high-impact AI and offering limited protection for users.
As South Korea strives to become one of the world's leading AI powers alongside the US and China, it remains to be seen whether its new legislation will prove effective in balancing industry growth with public safety and accountability.