Family AI Risk Assessment: A Practical Toolkit for Saudi Parents

PeopleSafetyLab | March 10, 2026 | 11 min read

The television talks to your daughter when you're not in the room.

It's not a metaphor. Samsung's Neo QLED, like dozens of other smart TVs sold in Riyadh electronics stores, has a built-in voice assistant that listens for wake words. The PlayStation 5 in your son's bedroom uses AI to moderate voice chat. The baby monitor in the nursery runs machine learning to detect crying patterns. The iPad your teenager uses for homework has on-device AI that suggests replies to messages, curates news feeds, and quietly learns the rhythms of their day.

None of this is hidden. None of it is illegal. And almost none of it is something Saudi parents have been trained to think about as "AI" — let alone assess for risk.

Meanwhile, across the city, a different kind of meeting is happening. In a conference room at a major Riyadh bank, a governance officer walks through a 147-control AI risk framework, ticking boxes for data retention, consent mechanisms, and algorithmic transparency. The company has a chief AI officer, a written policy, and quarterly audits. The risk assessment took six weeks.

In the home? Nothing. No framework. No audit. No policy.

Yet the AI in your living room has more intimate access to your children than any enterprise system has to any customer. It hears their bedtime conversations. It watches them get dressed. It knows their homework struggles, their crushes, their anxieties whispered to a voice assistant at 2am.

The home is where AI governance goes to die. It's also where it matters most.

The Governance Gap in Your Living Room

There's a paradox at the heart of AI safety in 2026: the more we regulate AI in institutions, the more we neglect it in homes.

Saudi Arabia's Personal Data Protection Law (PDPL) is among the strongest privacy frameworks in the region. SDAIA's AI Ethics Principles provide clear guidance for organizations deploying algorithmic systems. The Kingdom's National Strategy for Data and AI aims to position Saudi Arabia as a global leader in responsible AI.

But none of these frameworks were designed for a mother in Jeddah trying to decide whether her nine-year-old should have unrestricted access to ChatGPT. None of them address the father in Riyadh who discovers that his child's educational app has been sharing behavioral data with advertisers. None of them help the grandmother in Dammam who doesn't know that her smart speaker is creating voice profiles of every family member.

The average Saudi household now contains between 15 and 25 AI-enabled systems. Not just phones and computers, but air conditioners that learn temperature preferences, refrigerators that track consumption, cars that monitor driver behavior, and toys that hold conversations. A 2024 study by King Saud University found that 73% of Saudi families with children under 12 own at least one "conversational AI" device — a speaker, assistant, or toy that engages in dialogue.

These systems are not trivial. They shape what children learn, how they communicate, and what they believe about the world. A chatbot can answer a curious question about religion, history, or ethics — and parents may never know what answer was given. An algorithm can recommend YouTube videos that gradually radicalize a teenager's worldview. A voice assistant can normalize sharing private thoughts with corporate servers.

We have built elaborate governance structures for AI in banks and hospitals while leaving families to navigate the algorithmic wilderness alone.

The Home AI Inventory: Finding What You're Actually Living With

Risk assessment begins with a simple question: What AI systems are in your home?

This is harder to answer than it sounds. AI has become a background technology, embedded in products that don't advertise themselves as "intelligent." A parent might know that ChatGPT is AI but not realize that their child's math learning app uses adaptive algorithms to personalize lessons — and to collect data on learning patterns.

Here is a practical inventory framework for Saudi families:

Conversational Systems

  • Smart speakers (Amazon Alexa, Google Nest, Apple HomePod, Samsung Bixby)
  • Voice assistants in phones, TVs, cars
  • Chatbots in apps and games
  • AI companions (Character.AI, Replika, etc.)

Content Recommendation Systems

  • YouTube, TikTok, Instagram feeds
  • Streaming services (Netflix, Shahid, Disney+)
  • News apps and aggregators
  • Gaming platforms (PlayStation, Xbox, Roblox)

Educational and Creative AI

  • ChatGPT, Claude, Gemini, and other LLMs
  • AI writing and image generation tools
  • Adaptive learning platforms (Khan Academy, IXL, local platforms like Noon)
  • Language learning apps (Duolingo, Babbel)

Smart Home Systems

  • Security cameras and baby monitors with AI detection
  • Smart TVs with voice control and content recommendations
  • Connected appliances (refrigerators, thermostats, vacuums)
  • Wearable devices (smartwatches, fitness trackers)

Toys and Entertainment

  • AI-enabled toys (CogniToys, Mattel's Hello Barbie, various robots)
  • Gaming consoles with voice/chat moderation
  • AR/VR headsets with environmental tracking
  • Interactive storybooks and learning devices
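
For families who want the inventory as a living record rather than a paper list, it can also be kept as a short script. Below is a minimal sketch in Python; the `AISystem` fields and the example entries are illustrative choices, not a prescribed schema, and the categories mirror the lists above.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str               # e.g. "Living-room smart speaker"
    category: str           # one of the five categories listed above
    location: str           # where in the home it lives
    users: list[str]        # which family members interact with it
    collects_voice: bool    # has a microphone or builds voice profiles
    collects_video: bool    # has a camera
    cloud_connected: bool   # sends data to external servers

# Illustrative entries only; a real inventory lists every system found.
inventory = [
    AISystem("Smart speaker", "Conversational Systems", "living room",
             ["whole family"], True, False, True),
    AISystem("Baby monitor with cry detection", "Smart Home Systems",
             "nursery", ["infant"], True, True, True),
    AISystem("Adaptive math app", "Educational and Creative AI",
             "child's tablet", ["9-year-old"], False, False, True),
]

# Quick summary: what listens or watches, and where.
for s in inventory:
    sensors = [label for label, on in
               [("voice", s.collects_voice), ("video", s.collects_video)] if on]
    print(f"{s.name} ({s.location}): {', '.join(sensors) or 'no mic or camera'}")
```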

The inventory is not a one-time exercise. New AI systems enter homes constantly — through app updates, new devices, and "free" services that trade convenience for data. A quarterly review, conducted as a family activity, can keep the inventory current and signal to children that AI is something worth paying attention to.

The Risk Matrix: Scoring What Matters

Not all AI systems pose equal risks. A voice assistant that sets kitchen timers is different from a chatbot that discusses sensitive topics with a lonely teenager. A streaming algorithm that recommends cartoons is different from one that gradually serves increasingly extreme political content to a curious adolescent.

Risk assessment combines two factors: likelihood of harm and severity of impact.

Likelihood considers:

  • How often the child interacts with the system
  • How much personal data the system collects
  • Whether the system has direct communication capabilities
  • How transparent the system is about its operations
  • Whether the system connects to external servers or operates offline

Severity considers:

  • What kind of content or influence the system can deliver
  • Whether the system can affect the child's self-image or mental health
  • Whether the system can expose the child to contact with strangers
  • Whether data collected could be used for profiling or targeting
  • Whether the system could normalize harmful behaviors or beliefs

A simple matrix produces actionable scores:

| System | Likelihood (1-5) | Severity (1-5) | Risk Score |
|--------|------------------|----------------|------------|
| Smart speaker in shared space | 4 | 3 | 12 |
| Unrestricted ChatGPT access for teen | 5 | 5 | 25 |
| YouTube Kids with parental controls | 3 | 2 | 6 |
| AI toy with voice recognition | 3 | 4 | 12 |
| Adaptive math app (school-provided) | 4 | 2 | 8 |
| Streaming service with profiles | 4 | 3 | 12 |

Systems scoring above 15 require immediate attention. Those scoring 10 to 15 need monitoring and rules. Scores below 10 may be acceptable with basic awareness.
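
The matrix reduces to one multiplication and three bands, so it is easy to keep as a small script or a spreadsheet formula. Here is a minimal sketch in Python using the thresholds above; the example entries are taken from the matrix, and the function names are illustrative.

```python
def risk_score(likelihood: int, severity: int) -> int:
    """Multiply the two 1-5 ratings from the matrix."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("ratings must be on a 1-5 scale")
    return likelihood * severity

def risk_band(score: int) -> str:
    """Apply the thresholds described above."""
    if score > 15:
        return "immediate attention"
    if score >= 10:
        return "monitoring and rules"
    return "acceptable with basic awareness"

# Example entries taken from the matrix above.
systems = {
    "Smart speaker in shared space": (4, 3),
    "Unrestricted ChatGPT access for teen": (5, 5),
    "YouTube Kids with parental controls": (3, 2),
}

for name, (likelihood, severity) in systems.items():
    score = risk_score(likelihood, severity)
    print(f"{name}: {score} -> {risk_band(score)}")
```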

The scoring is not meant to be precise — it's a conversation starter. What matters is that families develop a shared language for discussing AI risk, rather than treating all technology as either "safe" or "dangerous."

Guardrails by Age: What Works When

A six-year-old and a sixteen-year-old face different AI risks and need different protections. Yet most parental controls are binary: on or off. A more nuanced approach matches guardrails to developmental stages.

Ages 0-6: The Invisible Years

At this age, children don't understand what AI is. They talk to voice assistants as if they were people. They assume screens reflect reality. Guardrails should be environmental:

  • Keep conversational AI devices out of bedrooms
  • Disable voice purchasing and sensitive features
  • Use screen time limits and content filters
  • Talk about AI in simple terms: "The computer isn't a person, it's a machine that learned to talk"

Ages 7-11: The Curious Years

Children begin seeking information independently. They may ask chatbots questions they wouldn't ask parents. They start forming relationships with digital systems. Guardrails should be collaborative:

  • Create family rules for AI use together
  • Require parental approval for new AI apps
  • Use parental controls on content systems but explain why
  • Discuss AI answers together: "What did the computer tell you? What do you think about that?"

Ages 12-15: The Identity Years

Adolescents form identity through digital interaction. Recommendation algorithms shape their worldviews. Social validation comes from AI-curated feeds. Mental health is vulnerable. Guardrails should be dialogic:

  • Shift from control to coaching
  • Discuss how algorithms work and what they are designed to optimize for
  • Encourage critical thinking about AI-generated content
  • Monitor for signs of harmful algorithmic exposure (radicalization, self-harm content)
  • Maintain open communication without surveillance

Ages 16-18: The Transition Years

Older teenagers need preparation for adult AI interaction. They will soon face AI in universities, workplaces, and civic life. Guardrails should be preparatory:

  • Discuss AI in career and education contexts
  • Teach privacy hygiene and data awareness
  • Explore AI governance and ethics together
  • Begin transitioning to adult-level self-governance
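
For families tracking several children at once, the age bands above can be kept as a simple lookup alongside the inventory. A minimal sketch, with stage names and ranges taken directly from the headings above:

```python
def guardrail_stage(age: int) -> str:
    """Return the developmental stage and guardrail style for an age."""
    if age <= 6:
        return "Invisible Years: environmental guardrails"
    if age <= 11:
        return "Curious Years: collaborative guardrails"
    if age <= 15:
        return "Identity Years: dialogic guardrails"
    if age <= 18:
        return "Transition Years: preparatory guardrails"
    return "Adult: self-governance"

print(guardrail_stage(9))   # Curious Years: collaborative guardrails
print(guardrail_stage(14))  # Identity Years: dialogic guardrails
```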

The goal is not to control but to build capacity. By the time a child leaves home, they should understand AI well enough to govern their own relationship with it.

The Family AI Policy: A Starting Template

Most families have unwritten rules about technology. A written policy makes assumptions explicit and gives children clear expectations. It doesn't need to be corporate or formal — a single page, written in Arabic and/or English, can serve as a family constitution for AI.

Here is a template structure:

Our Family's AI Principles

  • What we believe about AI's role in our home
  • What values guide our decisions about technology
  • What we want AI to help us with — and what we don't

Rules by Family Member

  • Age-appropriate guidelines for each child
  • Shared device rules
  • Personal device rules

Data and Privacy Commitments

  • What information we share with AI systems
  • What we keep private
  • How we review and delete data

Content and Communication Guidelines

  • What kinds of AI content are acceptable
  • How we handle AI conversations about sensitive topics
  • What to do when AI says something concerning

Review and Update Process

  • When we revisit this policy (quarterly recommended)
  • How we handle new AI systems
  • How children can propose changes

Consequences and Exceptions

  • What happens when rules are broken
  • How to request exceptions
  • How we restore trust
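
For families comfortable with plain text, the policy itself can also live as a structured file that is easy to review and update each quarter. Below is a minimal sketch as a Python dictionary; the section names mirror the template above, and every value is a placeholder for the family to fill in, not recommended content.

```python
# A minimal machine-readable skeleton of the family AI policy.
# Section names mirror the template above; all values are placeholders.
family_ai_policy = {
    "principles": [
        "What we believe about AI's role in our home",
        "What values guide our decisions about technology",
    ],
    "rules_by_member": {
        "child_age_9": ["Parental approval required for new AI apps"],
        "teen_age_15": ["Open discussion of algorithmic feeds"],
        "shared_devices": ["Conversational AI stays in shared spaces"],
    },
    "data_and_privacy": {
        "share": ["First names only"],
        "keep_private": ["School details", "Location", "Health"],
    },
    "content_guidelines": [
        "Sensitive AI conversations are raised with a parent",
    ],
    "review": {"cadence": "quarterly", "children_can_propose_changes": True},
    "consequences": {"first_breach": "conversation, not punishment"},
}
```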

The policy is not a contract to be enforced punitively. It's a family agreement that evolves as children grow and technology changes. The process of writing it together may be more valuable than the document itself.

The Question That Lingers

The smart speaker still sits in the corner of the living room. The teenager still has ChatGPT on their phone. The baby monitor still watches over the nursery.

AI is not leaving Saudi homes. It will only become more embedded, more invisible, more intimate. The question is not whether families will live with AI, but whether they will do so with eyes open or closed.

The enterprise world has discovered that AI governance is not a constraint on innovation — it's what makes innovation sustainable. Families are beginning to learn the same lesson. A home with AI guardrails is not a home without AI. It's a home where AI serves the family rather than the family serving AI.

In a few years, today's children will be adults making decisions about AI in their workplaces, their communities, and their own families. What they learn now — in the conversations over dinner about why the smart speaker isn't really a person, in the family policy written on a rainy Saturday, in the critical questions asked about chatbot answers — will shape how an entire generation governs the most powerful technology humans have ever built.

The governance gap in the living room is real. But it doesn't have to stay that way.


PeopleSafetyLab helps families navigate AI safety with practical tools, clear frameworks, and the belief that governance belongs in homes, not just headquarters.

PeopleSafetyLab

Independent AI safety research for organisations and families in Saudi Arabia and the GCC. All research is editorially independent. PeopleSafetyLab has no consulting clients and does not conduct paid audits.
