
Australia’s Social Media Minimum Age Act Isolates Youth and Fractures the Open Internet


Today marks a turning point for Australian youth, as the Social Media Minimum Age Act (SMMA) is set to take effect. While framed as a necessary measure for child protection, its execution relies on blunt instruments that fracture the digital market and, paradoxically, may leave young people less safe than before. Please see this brief for more information on the Act.

For the global technology sector and safety advocates alike, the SMMA is more than just a distant regulatory experiment; it is a warning of how “minimum-age bans” could become a new standard for digital fragmentation, at the expense of U.S. firms and their users.

Targeting Innovation

The Act’s scope is ostensibly broad, covering services whose “sole” or “significant” purpose is social interaction. In practice, however, the eSafety Commissioner’s target list is far narrower and disproportionately American: eight of the ten services currently scoped by the regulator are U.S.-owned. The scale of this change is massive. According to research conducted by the University of Sydney, Instagram and YouTube are the most frequently used social media services among young people aged 12-17, used daily by 64 percent and 56 percent respectively, while one in two are daily Snapchat users.

Meanwhile, the legislation grants the Australian Minister for Communications sweeping discretion to designate or exempt services, with little transparency or requirement for independent review. This structure allows for an arbitrary selection of “in scope” services that appears to be driven as much by political visibility as by actual harm reduction. By focusing on the “most popular” services rather than a neutral risk assessment, the Act risks creating a de facto form of discrimination: its selective enforcement penalizes the digital services that have invested most heavily in safety features, while leaving a lane wide open for less-regulated competitors.

The Privacy Paradox

To comply with the Act’s mandate to take “reasonable steps” to block access for users under 16, in-scope companies face a daunting operational reality. Because the law applies to both new and existing accounts, services must effectively reassess their entire Australian user base.

Facing severe financial penalties of up to AUD $49.5 million (~USD $32.2 million) for major violations, companies will likely be forced toward the strictest forms of age assurance. This inevitably points to invasive measures like facial analysis or government ID verification, technology that remains nascent and whose implementation at such a massive scale is untested.

Here lies the paradox: while aiming to protect children, the law necessitates the creation of massive repositories of sensitive identity data. Security experts warn that these centralized databases will become prime targets for hackers, putting the very privacy the Act seeks to uphold at risk, especially for vulnerable young users.

A Step Backward for Safety

Perhaps the most troubling aspect of the SMMA is that it may actively undermine youth safety. By banning accounts for those under 16, the law overrides parental discretion and renders parental safety tools like content filters and activity logs useless, as these features require a linked account to function.

Determined minors will not simply log off; tech experts warn that the ban will drive young users toward opaque, less regulated corners of the internet where safety standards are non-existent. They will use VPNs, share accounts, declare false identities, or simply migrate to other platforms. Early reports indicate Australian teens are already flocking to services like Yope and Lemon8, which sit outside the current regulatory spotlight but have been put on notice by the eSafety Commissioner.

This shift exemplifies the “safety paradox” experts have warned about. As Dr. Catherine Page Jeffery of the University of Sydney notes, “there is a very real possibility that, if young people do migrate to less regulated platforms, they become more secretive about their social media use… and therefore if they do encounter concerning material… they won’t talk to their parents about it.” 

Furthermore, the ban threatens to isolate marginalized youth, particularly in regional and LGBTQ+ communities, who rely on online fora for identity-affirming support networks not available in their physical environments.

Stopping the Contagion

Australia is actively promoting this model in multilateral forums, and countries like New Zealand, Indonesia, and several EU member states are already considering similar frameworks. If this “digital border” approach is normalized, U.S. companies could face a fragmented global landscape of contradictory age-gating requirements that stifle innovation and free expression.

With the deadline now upon us, the government must prioritize a full feasibility and impact assessment over immediate penalties. A more effective path forward lies in “Safety by Design,” a principle introduced by the eSafety Commissioner herself, which involves enhancing default protections and parental tools rather than crude access prohibitions that fracture the open internet.

Australia’s intent to protect its youth is genuine, but good intent does not excuse bad policy. As the SMMA goes live, the U.S. has a strategic interest in ensuring this flawed model does not become the global standard.
