UK regulators are pressuring Meta, TikTok, Snap and YouTube to strengthen age checks and keep children off social media, signaling stricter enforcement under Britain’s Online Safety Act.
Britain’s media and privacy regulators warn major social media companies to strengthen age checks and protect children online or face heavy fines. Image: CH
LONDON, United Kingdom — March 12, 2026:
British regulators have intensified pressure on major social media companies, demanding stronger measures to keep underage children off platforms that fail to enforce minimum age requirements.
The warning, issued jointly by the UK’s communications regulator Ofcom and the Information Commissioner’s Office (ICO), targets some of the world’s largest digital platforms, including Meta, TikTok, Snap, and YouTube.
The move signals a tougher phase in Britain’s regulation of social media under the Online Safety Act, as policymakers grow increasingly concerned about children’s exposure to harmful or addictive online content.
UK regulators say algorithmic recommendation systems used by social platforms are amplifying risks for minors by promoting highly engaging but potentially harmful content.
Ofcom chief executive Melanie Dawes warned that major platforms—despite being global household names—are not prioritizing child safety in their product design.
“These online services are household names, but they’re failing to put children’s safety at the heart of their products,” Dawes said, adding that regulators will act if companies fail to respond quickly.
Authorities have instructed companies including Facebook, Instagram, Snapchat, TikTok, YouTube, and gaming platform Roblox to outline by April 30 how they will strengthen age verification systems and improve protections for younger users.
Alongside Ofcom’s action, the UK privacy regulator has issued an open letter urging social media companies to adopt “modern, viable” age-assurance technologies capable of preventing children under 13 from accessing services designed for older users.
Paul Arnold, chief executive of the Information Commissioner’s Office, said technological solutions already exist to enforce age restrictions more effectively.
“There’s now modern technology at your fingertips, so there is no excuse,” Arnold said.
Regulators want platforms to tighten age checks, restrict direct contact between minors and unknown adults, ensure safer algorithmic feeds, and stop experimenting with new features on children.
Britain’s enforcement framework includes significant financial penalties. Ofcom can impose fines of up to 10% of a company’s global revenue, while the privacy watchdog can levy penalties of up to 4% of annual global turnover.
The ICO has already shown its willingness to act. Last month, it fined Reddit nearly £14.5 million for failing to introduce meaningful age checks and for unlawfully processing children’s data.
The UK crackdown reflects a broader global push to tighten rules governing children’s access to social media. British lawmakers are currently considering proposals to ban under-16s from social platforms—an approach that mirrors measures introduced in Australia.
If implemented, such restrictions would mark one of the most aggressive attempts by a Western government to regulate youth participation in digital platforms.
For technology companies, the regulatory pressure in Britain could become a testing ground for future global rules. Governments across Europe, North America and Asia are increasingly examining how social platforms design algorithms, collect data and verify user ages.
For firms like Meta, TikTok, Snap and YouTube—whose business models depend heavily on user engagement—the challenge will be balancing stronger safeguards with maintaining growth and advertising revenue.
As regulators move from policy debates to enforcement, the UK’s approach may signal a new era in which protecting children online becomes a central requirement for operating global social media platforms.
