Is Instagram Doing Enough to Shield Teens From Unwanted Content?

A Meta survey found that 19% of Instagram users aged 13 to 15 reported seeing unwanted content, according to court documents released in a U.S. lawsuit.

A 2021 Meta survey cited in federal court shows significant teen exposure to unwanted content on Instagram, intensifying scrutiny of the company’s safety policies. Image: CH


Tech Desk — February 24, 2026

Nearly one in five young teens on Instagram reported seeing unwanted content on the platform, according to court filings made public in a federal lawsuit in California — a disclosure that adds to growing scrutiny of parent company Meta Platforms.

The documents, reviewed by Reuters, include excerpts from a March 2025 deposition of Instagram head Adam Mosseri. In testimony, Mosseri acknowledged that 19% of users aged 13 to 15 told Meta in a 2021 survey that they had seen content on Instagram that they did not want to view.

Meta spokesperson Andy Stone said the statistic was drawn from user survey responses about their experiences, rather than from an independent audit of posts on the platform.

The survey findings surface as Meta faces thousands of lawsuits in U.S. federal and state courts alleging that its platforms — including Facebook — are designed in ways that can negatively affect young users.

Lawmakers and regulators globally have questioned whether the company has done enough to protect minors online. The newly public documents could play a role in determining how much Meta knew internally about teen experiences and when.

In addition to the 19% figure, about 8% of teens surveyed in 2021 reported seeing posts involving self-harm or threats of self-harm, according to Mosseri’s deposition.

A separate internal memo dated January 20, 2021, also released as part of the lawsuit, shows a Meta researcher recommending that the company focus on teen users because they act as “catalysts” within households.

“If we’re looking to acquire (and retain) new users we need to recognize a teen's influence within the household,” the memo stated, suggesting that teenagers help drive platform adoption among siblings and parents.

The document highlights the commercial importance of teen engagement at a time when safety concerns were already emerging.

Meta did not immediately respond to requests for comment regarding the internal memo.

Mosseri said in his deposition that much of the unwanted content reported by teens was shared through private messages, creating challenges for oversight.

“A lot of people don't want us reading their messages,” he said, underscoring the tension between monitoring harmful material and maintaining user privacy.

In late 2025, Meta said it would remove certain types of policy-violating content from teen accounts, including material generated using artificial intelligence, while allowing exceptions for educational and medical contexts.

“We’re proud of the progress we’ve made, and we’re always working to do better,” Stone said.

The court disclosures offer a rare look at internal data and strategy as Meta navigates intensifying legal and political challenges. The 19% statistic is likely to become a focal point in debates about platform accountability and youth protection.

As lawsuits proceed and policymakers weigh stronger oversight measures, the question facing Meta — and the wider social media industry — is whether existing safeguards are sufficient to address risks young users say they are experiencing.
