Tristan Harris spent three years as a design ethicist at Google, tasked with thinking about the ethics of how technology is designed. What he concluded — and has spent the decade since trying to explain to anyone who will listen — is simpler and more disturbing than most people want to hear: "There are a billion people on the other side of the screen," he has said, "and they have a thousand engineers working to get you to stay."
For the billion people on the receiving end of those thousand engineers, the experience feels like preference. It feels like interest. It feels like choosing what to watch. This is the elegant deception at the heart of the modern recommendation engine: it constructs a version of you — your tastes, your vulnerabilities, your deepest preoccupations — and then feeds that version back to you so seamlessly that the distinction between choosing and being chosen for disappears.
When that billion includes children, the stakes of the deception change entirely.
How the Machine Builds Its Model of Your Child
Every major platform's recommendation system operates on a variation of the same underlying logic: predict what the user will engage with next, and show it to them before they consciously decide they want it.
TikTok's For You Page — the feed that delivers content to users without requiring them to follow any accounts — operates through what the company calls an "interest graph." When a new video is uploaded to the platform, TikTok's system runs a trial: the video is shown to a small cohort of users whose past behavior suggests they might respond to it. Their engagement signals — how much of the video they watched, whether they replayed it, whether they shared or commented — determine whether the video reaches broader audiences. A 100 percent watch-through rate is the most powerful positive signal the system recognizes. This creates a feedback loop that rewards not depth or quality but whatever triggers the most complete, compulsive attention.
The algorithm does not interpret content. It interprets behavior. It cannot watch a video or understand what it depicts. What it can do, with extraordinary precision, is recognize which behavioral patterns predict continued engagement — and engineer the feed to produce those patterns.
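To make that concrete, here is a minimal sketch in Python of the trial-and-promotion scoring the interest graph implies. Every name, weight, and threshold below is invented for illustration; none of this is TikTok's actual code. What the sketch does preserve is the structural point: the score is computed entirely from how a trial cohort behaved, never from what the video contains.

```python
from dataclasses import dataclass

@dataclass
class ViewEvent:
    """One viewer's behavior on one trial video (fields are illustrative)."""
    watch_fraction: float   # 1.0 means watched to the very end
    replays: int            # times the viewer looped the video
    shared: bool
    commented: bool

def engagement_score(events: list[ViewEvent]) -> float:
    """Score a trial video purely from behavioral signals.

    The weights are invented; the structure is the point. Nothing in
    this function looks at what the video shows, only at how a small
    trial cohort reacted to it.
    """
    total = 0.0
    for e in events:
        total += 3.0 * e.watch_fraction          # completion dominates
        if e.watch_fraction >= 1.0:
            total += 1.0                         # bonus for a full watch-through
        total += 1.5 * e.replays
        total += 2.0 if e.shared else 0.0
        total += 1.0 if e.commented else 0.0
    return total / max(len(events), 1)

def promote(events: list[ViewEvent], threshold: float = 4.0) -> bool:
    """A trial video 'graduates' to wider audiences if engagement is high enough."""
    return engagement_score(events) >= threshold
```

Note what the function has no column for: accuracy, quality, age-appropriateness. A video that produces complete, repeated watching scores identically whether it is a piano tutorial or something no twelve-year-old should see.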
YouTube's recommendation engine works differently but toward the same end. Its two-stage architecture first retrieves candidate videos from a pool of billions, then ranks and sorts them using deep neural networks trained on historical viewing behavior. The system's question is not merely "will you watch this video?" It is: "will watching this video cause you to watch the next one, and the one after that?" Session extension — keeping you inside the platform — is the objective function.
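The two-stage shape is publicly documented; Google engineers described it in a 2016 paper, "Deep Neural Networks for YouTube Recommendations." The sketch below compresses that architecture to a skeleton with toy stand-ins for the learned models. It is an assumption, not YouTube's code, and it exists to show where the objective lives: not in the retrieval step, but in the ranking step that sorts by predicted further watching.

```python
import numpy as np

def user_vector(watched: list[np.ndarray]) -> np.ndarray:
    """Toy stand-in for a learned user embedding: the average of the
    vectors of videos this user has already watched."""
    return np.mean(watched, axis=0)

def retrieve_candidates(user_vec: np.ndarray,
                        corpus: dict[str, np.ndarray],
                        k: int = 200) -> list[str]:
    """Stage 1: cheaply narrow an enormous corpus to k candidates.

    Production systems run approximate nearest-neighbor search over
    billions of items; a brute-force dot product over a toy corpus
    makes the same architectural point.
    """
    ranked = sorted(corpus, key=lambda vid: float(user_vec @ corpus[vid]),
                    reverse=True)
    return ranked[:k]

def rank_for_session(candidates: list[str], predicted_further_watch) -> list[str]:
    """Stage 2: a heavier model re-sorts the survivors.

    The crucial design choice is the sorting target: not 'will you
    enjoy this?' but 'how much more watching will this lead to?'
    `predicted_further_watch` stands in for that trained model.
    """
    return sorted(candidates, key=predicted_further_watch, reverse=True)
```

Swap the ranking target, say for a measure of reported satisfaction, and the same two stages would build a very different feed. The architecture is neutral; the objective function is not.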
The Addiction Threshold That TikTok Knew About
In 2024, litigation against TikTok produced a trove of internal documents that the company had fought strenuously to conceal. Among them was a piece of internal research that should be required reading for every parent in the world.
TikTok's own researchers had identified the precise behavioral threshold at which a user "is likely to become addicted to the platform": 260 videos viewed. In typical usage patterns, a new user reaches this threshold in under 35 minutes.
The platform had this information. The platform continued operating the same way.
Internal communications included a statement from TikTok's Head of Child Safety Policy — not a critic, not a regulator, but an employee whose job was the safety of children on the platform — acknowledging that the app's algorithm "keeps children engaged at the expense of other essential activities," including sleep, homework, and in-person socialization.
There were also documents describing how TikTok's algorithm had been "rejiggered" to amplify users the company deemed attractive — actively promoting a narrow beauty standard that internal documents acknowledged "could negatively impact" young users. When engineers proposed a non-personalized feed that might reduce compulsive use, leadership blocked it, citing business impact. A time-limit tool that prompted a pause after 60 minutes — the feature marketed loudly to parents and regulators as evidence of platform responsibility — reduced daily usage by approximately 1.5 minutes. It remained in the product because it looked like accountability while functioning as theater.
What the Algorithm Does to a Developing Brain
The adolescent brain is not a small adult brain. It is a distinct developmental stage characterized by a structural mismatch: the subcortical emotion-processing regions — the amygdala, the ventral striatum — mature early and intensely, making social rewards like approval, attention, and admiration neurochemically powerful in ways they never will be again. The prefrontal cortex, responsible for impulse control, future planning, and the ability to override the immediate reward for a longer-term goal, does not reach full development until the mid-twenties.
Recommendation algorithms are, whether by design or by optimization, targeting precisely this developmental window.
At New York-Presbyterian Hospital, researchers found that adolescents who habitually check social media show measurable structural differences in the brain regions associated with reward processing and impulse regulation — the amygdala responds more intensely, the prefrontal cortex shows altered development. This is not metaphorical. The brain is physically rewiring itself around the feedback loops the algorithm has engineered.
The mechanism is what behavioral scientists call variable reward scheduling — the same psychological architecture that makes slot machines compulsive. The social media feed does not deliver consistent rewards. It delivers unpredictable ones: sometimes a post gets a flood of likes, sometimes silence. Sometimes the next video is captivating, sometimes it is dull. This unpredictability, this variability in the reward, is what produces the compulsive return behavior. Aza Raskin, the designer who invented infinite scroll — the feature that removed the natural bottom-of-page stopping point from social feeds — has said publicly that he regrets it. He understood, too late, that he was removing the moment at which a user could choose to stop.
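The schedule itself is almost trivially simple to write down, which is part of what makes it unsettling. The toy simulation below uses invented probabilities and is no platform's code; it exists only to make the slot-machine structure literal. Each swipe is an independent draw, so the next reward is always, possibly, one swipe away.

```python
import random

def variable_reward_feed(n_swipes: int, hit_rate: float = 0.15,
                         seed: int = 0) -> list[str]:
    """Simulate a feed on a variable-ratio reward schedule.

    Each item is an independent draw: mostly filler, occasionally a
    hit. A fixed schedule with the same average payoff (say, every
    seventh video rewarding) is easy to walk away from; operant
    research going back to Skinner suggests the unpredictable version
    is the one that keeps organisms responding.
    """
    rng = random.Random(seed)
    return ["HIT" if rng.random() < hit_rate else "filler"
            for _ in range(n_swipes)]

# The hits land unpredictably among the filler. There is no pattern to
# learn, and therefore no natural point at which to stop.
print(variable_reward_feed(20, seed=7))
```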
The 260-Video Problem, In Human Terms
It is worth sitting with the math: 260 videos in 35 minutes works out to roughly one video every eight seconds. Your child opens TikTok for the first time, and within 35 minutes, the platform's own research suggests, they have crossed the threshold at which addiction becomes likely. They did not choose this. The platform chose it for them, by optimizing an algorithm over years of behavioral data from billions of users to find precisely the configuration that produces this outcome.
This is the central paradox of the recommendation engine: it feels like freedom because it is responsive to you. It shows you what you want. But what you want — what the algorithm has learned about your vulnerabilities, your interests, your emotional responses — was derived from data collected without your meaningful awareness, processed by systems you cannot examine, and optimized toward an objective (your continued engagement) that may have very little to do with your wellbeing.
For children, who have fewer cognitive resources to resist engineered persuasion, less experience recognizing manipulation, and brains that are developmentally primed to value social rewards above almost everything else, the algorithm is not just a content delivery system. It is, as some researchers have begun to say plainly, a behavior modification system — one that operates continuously, invisibly, and without parental awareness.
In the next piece, we examine what the algorithm does when it encounters a child who is already struggling — and why the most vulnerable teenagers are the ones the system serves with the greatest precision.
This is Part 3 of "Raising Children in the Age of Intelligent Machines," a 10-part series from PeopleSafetyLab on the intersection of AI and family safety.