Australia warns AI companies to enforce age restrictions or face fines, signaling a global push to protect minors from harmful content on chatbots and search engines.
*Australia’s eSafety regulator threatens fines of up to A$49.5 million for AI services that fail to block minors from self-harm, violence, or adult content. Image: CH*
CANBERRA, Australia — March 2, 2026:
Australia is positioning itself at the forefront of global efforts to regulate artificial intelligence, warning search engines, app stores, and AI chatbot providers that failure to restrict access to minors could trigger some of the largest penalties seen in tech regulation.
The country’s internet safety regulator, eSafety, has said that from March 9, AI-powered platforms — including general-purpose chatbots such as ChatGPT and companion chatbots — must prevent Australians under 18 from accessing pornography, extreme violence, self-harm content, or material promoting eating disorders. Companies that fail to comply face fines of up to A$49.5 million ($35 million).
The warning underscores a rising international concern over AI’s effects on youth. Researchers increasingly compare the mental health risks of chatbots to those previously associated with social media, noting that AI tools can manipulate emotions and sustain prolonged engagement. eSafety reported instances of children as young as 10 interacting with chatbots for up to six hours per day, raising alarms about potential long-term harm.
“AI companies are leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice and entrench young people into excessive usage,” a spokesperson said. eSafety indicated it is prepared to use its full regulatory powers, including targeting gatekeepers such as app stores and search engines that provide access to AI services.
Australia has already taken bold steps to limit online harms for minors. In December, it became the first country to ban social media for under-16s, citing mental health concerns — a move that drew global attention and inspired discussions among other world leaders about similar policies. The AI crackdown appears to follow the same logic: safeguard children proactively, even before widespread incidents emerge.
However, compliance remains a major challenge. A Reuters review of the 50 most popular text-based AI platforms found that only nine had implemented or announced age verification systems. Another 11 platforms employed blanket content filters or blocked Australian users altogether, while 30 showed no visible compliance measures. Companion chatbots in particular lagged, with three-quarters lacking functional age assurance or filtering systems.
High-profile platforms such as OpenAI’s ChatGPT, Anthropic’s Claude, and Replika have begun rolling out age verification or blanket content filters, and Character.AI has cut off open-ended chats for under-18s. By contrast, tools like Elon Musk’s Grok showed no evidence of age safeguards, raising potential compliance and legal risks amid global scrutiny over harmful content, including synthetic sexualized imagery of children.
Industry representatives caution that the regulations present challenges for AI developers, especially smaller startups. Jennifer Duxbury of DIGI, which helped draft Australia’s AI code, emphasized that companies are ultimately responsible for understanding and meeting their legal obligations.
Experts highlight a deeper problem: many AI tools are still being designed without sufficient attention to safety controls. Lisa Given, director of RMIT University’s Centre for Human-AI Information Environments, said the Reuters findings were unsurprising. “It feels as though … we’re beta testing all of these things for these companies, and they’re trying to see how far society is willing to be pushed,” she said.
Australia’s hardline approach may set a benchmark for global AI regulation, forcing companies to embed age verification and content moderation as standard practice. With the March 9 deadline looming, the world will be watching to see if AI platforms comply or whether Australia will escalate enforcement by targeting app stores and search engines — effectively controlling the gateways through which millions of users access AI.
The crackdown signals a pivotal moment: governments are beginning to assert that AI services must protect vulnerable users or face severe penalties, marking a potential turning point in the global regulation of artificial intelligence.
