In 2017, a 14-year-old girl in North London named Molly Russell died by suicide. In the six months before her death, she had interacted with 16,300 pieces of content on Instagram. Of those, 2,100 concerned self-harm, depression, or suicide: material she had not sought out, but that the platform's algorithm had served to her, progressively, as it learned what kept her on the app.
Her father, Ian Russell, spent years pressing for an inquest. In September 2022, a UK coroner ruled that Instagram and Pinterest content had contributed to his daughter's death — a ruling that made legal history. In 2025, the Molly Rose Foundation published a report documenting that Instagram and TikTok were still recommending suicide and self-harm content to at-risk teenagers "at industrial scale." Eight years after Molly Russell died.
This is not a story about one algorithm's failure. It is a story about what happens when an optimization system encounters a teenager in crisis — and why the system, built to maximize engagement, performs at its highest capacity precisely on the users it most harms.
The Numbers No One Was Expecting
The timeline of the teen mental health crisis is precise enough to be studied like a geophysical event. Something happened between 2010 and 2015 — and it happened across borders, across cultures, across measurement methodologies.
In the United States, the CDC's 2023 Youth Risk Behavior Survey found that 39.7 percent of high school students had experienced persistent sadness or hopelessness in the past year: sadness lasting at least two weeks and severe enough that they stopped engaging in their usual activities. Among female students, the figure was 53 percent. Among LGBTQ+ students, it was 65 percent. Emergency room visits for self-harm among American girls aged 10 to 14 increased by 42 percent between 2020 and 2022 alone. Between 2010 and the early 2020s, adolescent depression rates increased by approximately 45 percent, and anxiety diagnoses by 61 percent.
The United Kingdom showed the same trajectory. So did Canada. So did Australia. Across 37 countries surveyed in the PISA international education study, adolescent loneliness rose sharply after 2012 in every region except Asia, where smartphone adoption patterns differed. This international simultaneity matters enormously. A crisis appearing in many different countries at the same time does not point toward domestic explanations such as economic stress, school pressure, or changing parenting styles; it points toward something those countries adopted in common between 2010 and 2015.
The smartphone, equipped with a front-facing camera and connected to social media platforms, was globally adopted among adolescents between 2010 and 2015.
What Meta's Own Research Found
In September 2021, Frances Haugen, a former Facebook product manager, copied tens of thousands of internal documents and disclosed them to the Wall Street Journal. The revelations that followed — the "Facebook Files" — included internal research that Meta had conducted on Instagram's effects on teenage girls and had chosen not to act on.
The numbers from Meta's own researchers are worth quoting in full, because they were not produced by critics or regulators but by the company's internal teams:
Thirty-two percent of teenage girls said that when they felt bad about their bodies, Instagram made them feel worse. Thirteen and a half percent said Instagram made suicidal thoughts worse. Seventeen percent said it contributed to their eating disorders. One internal presentation concluded: "We make body image issues worse for one in three teen girls."
Beyond the harm itself, Meta's researchers documented the mechanism: not just the damage but the machine that produced it. When girls began consuming content related to eating disorders or body dissatisfaction, the algorithm interpreted their engagement as a positive signal and delivered more of the same. As Haugen later summarized the research: "They get more and more depressed. And it actually makes them use the app more. And so they end up in this feedback cycle where they hate their bodies more and more."
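The structure of that feedback cycle can be made concrete in a few lines of code. The sketch below is a deliberately toy simulation, not Meta's ranking system; every topic name, probability, and mood parameter in it is invented for illustration. It shows only the structural point: a recommender that optimizes raw engagement, with no wellbeing term anywhere in its objective, will serve a distressed user more of whatever distresses her, because distress reads to the system as engagement.

```python
import random

# Toy simulation of the engagement feedback loop described above.
# This is NOT Meta's code: the topics, probabilities, and mood
# dynamics are invented solely to illustrate the structure.

TOPICS = ["sports", "music", "friends", "body_negativity"]
EXPLORE = 0.1  # fraction of impressions spent trying other topics

def engagement_rate(history):
    # Unseen topics get a neutral prior so they still get tried.
    return sum(history) / len(history) if history else 0.5

def recommend(history, rng):
    # The ranker optimizes one signal: which topic this user engages
    # with most. It has no concept of WHY the user engages.
    if rng.random() < EXPLORE:
        return rng.choice(TOPICS)
    return max(TOPICS, key=lambda t: engagement_rate(history[t]))

def simulate(steps=500, seed=1):
    rng = random.Random(seed)
    mood = 0.0  # 0 = neutral; more negative = more distressed
    history = {t: [] for t in TOPICS}
    for _ in range(steps):
        topic = recommend(history, rng)
        if topic == "body_negativity":
            # Distressing content grips a distressed user harder
            # (engagement probability rises as mood falls)...
            engaged = rng.random() < 0.5 - mood
            if engaged:
                mood -= 0.01  # ...and consuming it worsens mood.
        else:
            engaged = rng.random() < 0.4
            mood = min(0.0, mood + 0.002)  # mild recovery otherwise
        history[topic].append(engaged)
    return mood, {t: len(h) for t, h in history.items()}

if __name__ == "__main__":
    mood, served = simulate()
    print(f"final mood: {mood:.2f}")
    print("impressions served:", served)
```

Run the sketch and the negative topic crowds out everything else within a few hundred impressions, while the simulated mood falls steadily. The specific numbers mean nothing; what matters is that nothing in the objective ever pushes back.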
This research existed. It was shared internally. And Facebook continued optimizing for engagement.
In November 2023, Arturo Bejar, a former Meta engineering director who had worked specifically on teen safety, testified before Congress. His statement was direct: Meta and its CEO "do not care about the harm teens experience on their platforms." He described emails from Meta executive Nick Clegg to Mark Zuckerberg describing "profound gaps with addiction, self-harm, bullying and harassment." Zuckerberg did not respond to those emails, according to Bejar's account.
The Comparison Machine
Social comparison — the tendency to evaluate oneself relative to others — is not a pathology. It is a deeply human mechanism, evolved for social animals living in groups where status and belonging determined survival. The problem with social media is not that it activates social comparison. The problem is that it industrializes it: makes it continuous, visual, algorithmically curated for maximum emotional intensity, and impossible to escape.
A meta-analysis published in 2024, synthesizing 83 studies with 55,440 participants, found a significant correlation between higher online social comparison and greater body image concerns. The correlation between social comparison and eating disorder symptoms was also significant. Among teenage girls aged 15 to 17, research found that more than 27 percent had contemplated suicide as a result of appearance-based comparison.
The specific mechanism that Instagram and TikTok exploit is upward comparison: comparing oneself against people who appear more attractive, more popular, more successful. The platforms are not neutral mirrors. They are curated showcases, algorithmically populated with content that drives engagement. Highly engaging content tends to be emotionally stimulating: beautiful, surprising, provocative. For teenage girls, "beautiful" on social media has become a narrow, filtered, increasingly AI-enhanced aesthetic that bears little resemblance to ordinary human appearance, yet is increasingly difficult to recognize as artificial.
Surgeon General Vivek Murthy, writing in the New York Times in June 2024, stated plainly that "social media has emerged as an important contributor" to the mental health emergency among young people — and called on Congress to mandate warning labels on social media platforms comparable to tobacco and alcohol warnings. The U.S. Senate passed the Kids Online Safety Act 91-3 in July 2024. As of this writing, it has not become law.
Sleep, and the Notification That Arrives at 2 AM
One of the most documented and least discussed harms of algorithmic social media is what it does to sleep. Nearly 65 percent of adolescents report inadequate sleep during weekdays. Twenty-nine percent experience insomnia symptoms. These are not marginal figures — they describe a generation operating with chronic sleep deprivation, a condition with well-established consequences for mood regulation, impulse control, academic performance, and mental health.
The mechanisms are multiple. Blue light from screens suppresses melatonin production by approximately 46 percent, delaying sleep onset. Emotionally stimulating content, the kind the algorithm is optimized to deliver, activates the body's arousal systems in ways that make winding down for sleep physiologically difficult. And the notification architecture of social platforms (research finds the average teenager receives 192 notifications a day) means that the device demands attention even after a teenager has resolved to stop.
Selena Rodriguez was 11 years old when she died by suicide in 2021. The lawsuit filed against Meta and Snap on behalf of her family described "severe sleep deprivation caused and aggravated by her addiction to Instagram and Snapchat, and the constant 24-hour stream of notifications and alerts received." She was 11, and the platforms had determined that her continued engagement was worth more than her sleep.
The legal reckoning is underway. As of January 2026, multidistrict litigation against Meta, TikTok, Snap, and YouTube, involving more than 1,000 individual plaintiffs, hundreds of school districts, and dozens of state attorneys general, was heading to trial; TikTok settled on the eve of proceedings. Australia, in December 2025, became the first country to ban children under 16 from social media entirely, with fines of up to AU$50 million for platforms in violation.
What no law has yet answered, and what no settlement or regulatory action has addressed, is the question of what has already happened to a generation that grew up inside these systems. In the next piece in this series, we look at a threat that the algorithm did not create but that AI has dramatically amplified: the use of synthetic media to deceive, manipulate, and harm children and their families.
This is Part 4 of "Raising Children in the Age of Intelligent Machines," a 10-part series from PeopleSafetyLab on the intersection of AI and family safety.