Australia's incoming law banning social media accounts for children under 16 has drawn warnings that it could backfire. YouTube argues the move could make children less safe online, while the government insists it is necessary for their long-term protection.
Sydney, Australia – December 6, 2025:
In a move to address growing concerns over online safety, the Australian government is set to implement a controversial law barring children under 16 from holding accounts on social media platforms such as YouTube, Facebook, Instagram, and TikTok. The law, scheduled to take effect on December 10, has sparked intense debate, with video-sharing giant YouTube warning that the policy could do more harm than good.
YouTube’s public policy team has voiced strong opposition to the law, claiming it was rushed and could undermine the safety mechanisms the platform has spent years building. According to YouTube, barring children under 16 from holding accounts will remove critical parental controls, such as content filters and the ability to block harmful channels. Parents who currently rely on these controls will be unable to monitor or guide their children’s media consumption, leaving children vulnerable to unregulated content.
Additionally, YouTube pointed out that features designed to promote healthy habits, such as reminders to "take a break" or "go to bed", would no longer be available to children once the law takes effect. These features are tied to having a user account; without one, children will still be able to watch videos, just without any of those safeguards.
Anika Wells, Australia’s Communications Minister, responded to YouTube’s concerns by emphasizing that the government was simply holding platforms accountable for the risks they expose children to. “It’s quite strange,” Wells said. “If YouTube is the one reminding us that their platform is not safe, then YouTube should fix the problem.”
Wells argued that today’s children, especially those in Generation Alpha (the cohort born after 2010), are growing up in an era of uninterrupted digital access, where algorithms and notifications are engineered to keep them in an endless cycle of content. This pervasive “dopamine drip,” as Wells described it, harms children’s development, she said, leading to attention problems and exposure to predatory content.
The minister acknowledged that previous generations also faced online harms, but said the scale and impact of today’s digital technology are far greater. “With one law, we can save Generation Alpha from the vacuum of space into which predatory algorithms drag them,” she argued.
As the December 10 deadline approaches, the Australian government is already seeing signs that teenagers are shifting to alternative social media platforms. Apps such as Lemon8, owned by TikTok’s parent company ByteDance, and Yope have seen a noticeable uptick in downloads as younger users look for ways to bypass the ban. This trend presents a new challenge for regulators, who may now have to monitor a broader range of platforms.
While the Australian Communications and Media Authority (ACMA) is already investigating these apps, the rapid pace of change in the tech world raises a key question: will the law be effective at curbing risks, or will it simply drive young users toward platforms with less robust safety controls?
The conflict between the Australian government and major tech companies highlights a deeper tension in the ongoing conversation about child safety online. To critics, YouTube’s warning that the law would “weaken strong protection mechanisms” illustrates how reluctant platforms are to take responsibility for their role in fostering environments that expose children to harm. While many tech companies, including YouTube, have developed safety features, their effectiveness remains a point of contention. Critics argue that these platforms have consistently failed to address the root causes of online dangers, such as algorithm-driven content that prioritizes engagement over safety.
Rachel Lord, Senior Manager of Public Policy at Google and YouTube Australia, restated the company’s position that the law would not deliver the promised safety benefits, claiming it would make children less safe by removing access to valuable parental tools and content filters. In her statement, Lord emphasized that parents and teachers shared these concerns, acknowledging the challenges of protecting children in a constantly evolving digital landscape.
Fines of up to A$49.5 million for non-compliance put immense pressure on companies to enforce the age restrictions. However, many worry that these financial penalties could backfire, prompting platforms to take an overly cautious approach or even to withdraw their services from Australian users altogether.
Despite the potential drawbacks, the law represents Australia’s latest attempt to address the growing crisis of online child safety. It also reflects a global trend toward greater scrutiny of tech companies, as governments grapple with how to regulate powerful platforms that shape modern life.
While the new law may bring some immediate changes, it is clear that there is no simple solution to the complex problem of child safety online. Australia’s decision to ban social media use for children under 16 may curb some harmful online behavior, but it will also present new challenges, especially as younger users find alternative ways to engage with digital content.
As this law comes into effect, it will be important for both the government and tech companies to remain flexible and adaptive. The law’s true impact won’t be clear for months, if not years, and adjustments may be necessary to ensure that children are protected without inadvertently creating more risks. The conversation surrounding digital safety is far from over, and Australia’s experiment may be a bellwether for similar efforts around the world.
