The world of social media is facing a growing crisis: how to verify the age of its users without compromising their privacy. Governments and lawmakers are under pressure to regulate online platforms, particularly those used by children, while tech companies are struggling to improve user safety without sliding into digital surveillance.
TikTok has taken a significant step in addressing this issue with the launch of its new age-detection system across Europe. The system uses a combination of profile data, content analysis, and behavioral signals to identify accounts that may belong to minors under the age of 13. Unlike some other platforms, TikTok does not automatically ban users; instead, it flags suspicious accounts for human moderators to review.
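TikTok has not published the internals of this system, but the flow it describes, combining weak signals into a probabilistic score and then routing borderline accounts to human reviewers rather than banning them outright, can be sketched roughly as follows. This is a minimal illustration only: every signal name, weight, and threshold below is a hypothetical stand-in, not TikTok's actual model.

```python
from dataclasses import dataclass

# Hypothetical sketch only: TikTok has not disclosed its real signals,
# weights, or thresholds. This illustrates the described flow (score the
# account, then queue it for human review), not the actual system.

@dataclass
class AccountSignals:
    stated_age: int        # age given at sign-up (under-13 sign-ups are normally blocked outright)
    profile_score: float   # 0-1, e.g. bio or avatar suggests a younger user
    content_score: float   # 0-1, e.g. topics and captions typical of minors
    behavior_score: float  # 0-1, e.g. usage patterns typical of minors

def estimate_minor_probability(s: AccountSignals) -> float:
    """Combine weak signals into one probabilistic estimate (hypothetical weights)."""
    return 0.4 * s.profile_score + 0.3 * s.content_score + 0.3 * s.behavior_score

def route_account(s: AccountSignals, review_threshold: float = 0.7) -> str:
    """Flag likely under-13 accounts for human review instead of banning automatically."""
    if estimate_minor_probability(s) >= review_threshold:
        return "send to human moderation queue"  # a person makes the final call
    return "no action"

# Example: an account whose signals point strongly toward an under-13 user.
print(route_account(AccountSignals(stated_age=14, profile_score=0.9,
                                   content_score=0.8, behavior_score=0.85)))
# -> "send to human moderation queue"
```

The design choice the article emphasizes is the last step: the probabilistic score only queues an account for review, and the decision to remove it stays with a human moderator.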
However, experts say that this approach still requires social platforms to surveil users more closely than before. Alice Marwick, director of research at Data & Society, argues that the system relies on probabilistic guesses and can produce errors and bias, particularly for groups with which TikTok's moderators lack cultural familiarity.
Moreover, the process of age verification raises broader questions about whether technology alone can resolve what is fundamentally a policy and societal challenge. Marwick worries that the current system creates friction and data collection without necessarily improving outcomes for users.
Some argue that age-verification mandates are "segregate-and-suppress laws" that unfairly target certain groups of users, minors among them. Jess Miers, an assistant professor at the University of Akron School of Law, notes that legal exposure is significantly higher in the US than in Europe, making it harder to implement this approach without inviting First Amendment litigation.
In contrast, organizations like the Canadian Centre for Child Protection advocate for site-wide bans on social media access for children under 16. Lloyd Richardson, the organization's director of technology, suggests consulting developmental experts to determine what is age-appropriate online.
Ultimately, the debate over online child safety highlights the need for an approach that weighs user safety against digital privacy. Age-verification systems like TikTok's may seem preferable to automatic bans, but they still require platforms to watch their users more closely than before.