How Australia found itself battling big tech over social media for children

Australia is poised to implement the world’s first comprehensive ban on social media access for users under 16, marking a radical escalation in global efforts to regulate technology giants. The groundbreaking legislation, set to take effect despite vigorous industry opposition, requires platforms to take “reasonable steps” to prevent underage account creation, with penalties reaching A$49.5 million for serious violations.

The move emerges against a backdrop of deteriorating trust in social media companies, exemplified by former Facebook Australia CEO Stephen Scheeler’s transformation from digital optimist to industry critic. “There’s lots of good things about these platforms, but there’s just too much bad stuff,” Scheeler told the BBC, reflecting a growing consensus among regulators worldwide.

Tech giants including Meta, TikTok, Snapchat, and YouTube have mounted coordinated resistance, arguing through trade group NetChoice that the ban constitutes “blanket censorship” that will leave youth “less informed, less connected, and less equipped to navigate the spaces they will be expected to understand as adults.” They particularly challenge the technological feasibility of age verification and advocate instead for parental control mechanisms.

Australian Communications Minister Anika Wells remains uncompromising, noting that companies have had “15, 20 years in this space to do that of their own volition now, and… it’s not enough.” Her stance has attracted international attention, with countries including Fiji, Norway, Singapore, Brazil, and EU members such as Greece, Malta, and Denmark actively exploring similar measures.

The regulatory pressure coincides with major legal challenges in the United States, where a landmark January trial will consolidate hundreds of claims alleging social media platforms deliberately designed addictive features while concealing known harms to adolescent mental health. Meta founder Mark Zuckerberg and Snap CEO Evan Spiegel have been ordered to testify personally in cases examining platforms’ role in teen sexual exploitation, body dysmorphia, and suicide.

In response to mounting scrutiny, companies have introduced age-restricted versions of their platforms. YouTube deployed AI-based age estimation technology, Snapchat implemented default safety settings for teens, and Meta launched Instagram Teen accounts with enhanced privacy protections. Yet whistleblowers like former Meta engineer Arturo Béjar maintain these measures remain largely ineffective, with September research indicating nearly two-thirds of new safety tools fail to provide meaningful protection.

Industry analysts note companies walk a delicate line—complying sufficiently to avoid penalties while ensuring implementation isn’t so successful that it inspires global replication. Carnegie Mellon’s Professor Ari Lightman observes that even maximum fines represent merely “a drop in the bucket” for companies prioritizing access to future user generations.

Despite implementation challenges, Scheeler characterizes this moment as social media’s “seatbelt moment”—acknowledging that “even imperfect regulation is better than nothing, or better than what we had before.” As Australia becomes the world’s testing ground for youth social media restrictions, the outcome will likely shape digital regulation for the next generation.