South Korea's Bold Attempt to Regulate Artificial Intelligence Faces Backlash Amid Global Unease
The South Korean government has set its sights on becoming one of the world's leading powers in artificial intelligence, with the launch of a comprehensive set of laws aimed at regulating the rapidly advancing technology. The 'AI Basic Act,' which came into effect last week, is being hailed as a model for other countries to follow but has already encountered fierce pushback from local tech startups and civil society groups.
Critics argue that the new legislation goes too far, imposing excessive regulations on companies providing AI services without adequately addressing the needs of smaller players or foreign firms. The law requires companies to label AI-generated content and conduct risk assessments for high-impact AI systems, including those used in medical diagnosis and hiring decisions.
However, many are questioning whether the rules are sufficient to protect citizens from potential harm caused by AI systems. With South Korea accounting for 53% of all global deepfake pornography victims, a growing sense of unease is palpable. Civil society groups have expressed concern that the new legislation provides limited protection for people harmed by AI systems.
One major criticism is that the law's exemption provision for "human involvement" creates significant loopholes, leaving many vulnerable individuals without adequate recourse. The country's human rights commission has also criticized the enforcement decree for lacking clear definitions of high-impact AI, placing those most likely to suffer rights violations in regulatory blind spots.
Experts note that South Korea has opted for a more flexible framework than other jurisdictions, relying on "trust-based promotion and regulation" rather than a strict risk-based model. While this approach may serve as a useful reference point in global AI governance discussions, it remains to be seen whether the country's bold attempt to regulate artificial intelligence will ultimately yield positive results.
The government has promised a grace period of at least a year before penalties are imposed for non-compliance, with fines of up to 30 million won (£15,000) for those who violate the rules. However, many question whether penalties of this size will be enough to ensure compliance, given the resources of the companies involved.
As South Korea strives to establish itself as a leading player in the rapidly evolving AI landscape, it is clear that navigating the complex web of regulatory frameworks and industry standards will require careful consideration and cooperation between government agencies, civil society groups, and tech companies.