WELLINGTON, New Zealand (AP) — Australian officials announced Friday that social media platforms have deactivated or restricted approximately 4.7 million accounts belonging to minors since the nation’s groundbreaking under-16 social media ban took effect in December. The sweeping ban is one of the world’s most aggressive regulatory actions against technology companies in the name of child protection.
Communications Minister Anika Wells declared the measure a victory for Australian families, stating: “We confronted some of the world’s most powerful corporations and their supporters who claimed this was impossible. Australian parents can now feel assured that their children can reclaim their childhoods.”
The comprehensive data, submitted to Australia’s government by ten major social media platforms, provides the first quantitative assessment of the policy’s impact. The legislation emerged from mounting concerns about harmful digital environments affecting youth development, triggering intense national debates about technology usage, privacy rights, child safety protocols, and mental health implications.
Under Australia’s regulatory framework, prominent platforms including Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, X, YouTube, and Twitch face potential penalties of up to AU$49.5 million (US$33.2 million) for non-compliance with age verification requirements. Messaging services such as WhatsApp and Facebook Messenger remain exempt from these restrictions.
Platforms can verify ages in one of three ways: by requesting official identification documents, by using third-party facial age estimation technology, or by inferring age from existing account data, such as how long an account has been active.
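Taken together, these signals amount to a layered age-assurance check. The sketch below is purely illustrative and not drawn from any platform’s actual systems; the helper fields and the fallback order (ID first, then facial estimation, then account longevity) are assumptions used to show how such a check might fit together.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

MIN_AGE = 16  # threshold under Australia's ban


@dataclass
class Account:
    # Hypothetical signals a platform might hold about an account.
    id_document_dob: Optional[date]          # date of birth from an uploaded ID, if provided
    estimated_age_from_face: Optional[int]   # result of third-party facial age estimation, if run
    account_created: date                    # account longevity signal


def assess_age(account: Account, today: date) -> Optional[int]:
    """Return a best-effort age estimate, trying the strongest signal first.

    Assumed fallback order: official ID -> facial age estimation ->
    inference from account metadata (here, only account longevity).
    """
    if account.id_document_dob is not None:
        dob = account.id_document_dob
        # Exact age from a verified date of birth.
        return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    if account.estimated_age_from_face is not None:
        return account.estimated_age_from_face
    # Illustrative metadata heuristic: if the platform's nominal minimum
    # signup age was 13, a user whose account is N full years old is at least 13 + N.
    years_active = (today - account.account_created).days // 365
    return 13 + years_active if years_active > 0 else None


def must_restrict(account: Account, today: date) -> bool:
    """Restrict the account when no signal establishes the user is 16 or older."""
    age = assess_age(account, today)
    return age is None or age < MIN_AGE


if __name__ == "__main__":
    acct = Account(id_document_dob=None,
                   estimated_age_from_face=None,
                   account_created=date(2025, 6, 1))
    print(must_restrict(acct, date(2026, 2, 6)))  # True: no signal proves the user is 16+
```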
According to eSafety Commissioner Julie Inman Grant, Australia has about 2.5 million children aged 8-15, and before the ban 84% of 8-12 year-olds were using social media. The commissioner characterized the 4.7 million account removals as “encouraging” progress in protecting minors from predatory digital practices.
Meta, parent company of Facebook, Instagram, and Threads, reported removing approximately 550,000 accounts belonging to suspected underage users within the policy’s first operational day. Despite complying, Meta criticized the regulatory approach in an official blog post, warning that smaller exempt platforms might not prioritize safety measures and that algorithmic content delivery remains unaddressed.
The policy garnered substantial support from parents and child safety advocates while drawing opposition from digital privacy organizations and youth representatives who highlighted the importance of online communities for vulnerable and geographically isolated adolescents.
Prime Minister Anthony Albanese celebrated the policy’s international influence, noting: “Despite initial skepticism, Australia’s framework is now inspiring global replication—a source of national pride.” Denmark has already announced plans to implement similar restrictions for children under 15.
Opposition lawmakers warned that children could easily circumvent the ban by lying during age verification or with adults’ help, or by migrating to less-scrutinized apps. Commissioner Inman Grant acknowledged an initial spike in downloads of alternative apps but said there had been no sustained increase in their use.
The eSafety Commission plans to introduce pioneering restrictions on AI companions and chatbots in March, further expanding Australia’s digital child protection framework.
