
The EU AI Act: What Saudi Enterprises Need to Know

Understanding the implications of the EU AI Act for Saudi organizations with EU operations

Nora Al-Rashidi | February 24, 2026 | 12 min read | Intermediate

Consider a Saudi financial technology company that has spent the better part of three years building an AI-driven credit-scoring platform. The system works well. It has been approved by SAMA, its documentation is clean, its governance structure passes muster under SDAIA's AI Ethics Principles. Then, late in 2025, the company's legal team raises a question that no one had thought to ask: does the platform evaluate the creditworthiness of any EU residents? It does — a small but meaningful slice of the user base holds dual nationality or residency in Germany and France. The question that follows is less comfortable than the first: are you in compliance with the EU AI Act?

This scenario, which is not hypothetical in its general shape, captures the situation facing a growing number of Saudi enterprises. The EU AI Act — the first comprehensive, binding framework for artificial intelligence governance to carry the force of law anywhere in the world — entered into force in August 2024. Its prohibitions on the most dangerous AI practices became enforceable in February 2025. The requirements for high-risk AI systems, including those used in employment, finance, and essential services, activate in August 2026. Full implementation follows in August 2027. What matters for organizations headquartered in Riyadh, Jeddah, or Dammam is a fact that Brussels made explicit from the beginning: this law does not stop at EU borders.

Extraterritorial by Design

The EU AI Act applies to providers who place AI systems on the EU market, regardless of where those providers are established. It applies to deployers who use AI systems within the EU. And it applies to providers and deployers located outside the EU when the outputs of their AI systems are used inside the EU. The jurisdictional logic is identical to that of the GDPR, which Saudi organizations have navigated, with varying degrees of success, since 2018. The operative test is not where the company is registered. It is where the effect is felt.

This matters because Saudi enterprises have expanded their EU footprints substantially in recent years. Vision 2030 has pushed diversification into global financial markets, technology partnerships, tourism infrastructure, and logistics — much of it touching EU counterparties. A Saudi company that manages AI-powered recruitment tools for a European subsidiary, or that deploys a chatbot trained on EU customer data, or that provides AI-generated risk assessments to European institutional clients, is within the scope of the Act. The question is not whether the regulation applies; for many organizations, it already does. The question is what compliance actually requires.

A Framework Built on Risk

Brussels classifies AI systems into four categories, arranged by the harm they can cause to individuals and society. The architecture of the regulation flows entirely from this classification, and understanding it is the precondition for everything else.

At the top sits a category of prohibited practices — AI applications that the EU has determined pose unacceptable risks and must not exist in the market at all. Systems that use subliminal techniques to manipulate human behavior against users' interests. Government-operated social scoring systems that evaluate citizens on the basis of their conduct or personal characteristics. Biometric categorization systems that infer sensitive attributes — race, political opinion, religious belief — from physical data. Real-time remote biometric identification in publicly accessible spaces, with narrow exceptions carved out for law enforcement. These prohibitions took effect in February 2025. Any organization whose AI portfolio includes something that resembles these descriptions faces not merely a compliance gap but a structural legal problem.

Below prohibited practices sits the high-risk category, and this is where the bulk of the regulation's compliance machinery lives. Brussels classifies as high-risk those AI systems deployed in consequential domains: critical infrastructure, educational assessment, employment and worker management, access to essential private and public services, law enforcement, migration and border control, administration of justice. These systems are not banned — the EU's position is that they can be deployed responsibly — but they carry a demanding set of obligations. Operators must establish and maintain ongoing risk management systems, not as a one-time exercise but as a continuous process of identification, evaluation, and mitigation. Training data must be governed rigorously, with documented processes for detecting and correcting bias. Technical documentation must be comprehensive enough to allow regulators to assess conformity. Human oversight must be genuine — not a checkbox, but a structural capacity for natural persons to monitor system behavior and intervene when needed.

A third tier imposes transparency obligations on systems whose primary risk is deception. Chatbots must disclose that users are interacting with AI. Systems that generate synthetic media — deepfakes, AI-authored text presented as human-written — must label their outputs. Emotional recognition systems must inform subjects that they are being assessed. These requirements are narrower in scope but more immediately visible to customers, and they create compliance surface in domains — customer service, marketing, communications — where many Saudi enterprises have already deployed AI at scale.

The fourth category, minimal risk, covers systems like spam filters, recommendation engines, and AI-assisted gaming. The Act imposes no mandatory requirements here, though it encourages adherence to voluntary codes of conduct.
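The four tiers can be sketched as a first-pass triage helper. This is an illustration only, not a legal determination: the domain lists below are a simplified, non-exhaustive subset of the Act's actual definitions and annexes, and the use-case labels are invented for the example.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"      # banned outright
    HIGH = "high"                  # heavy compliance obligations
    TRANSPARENCY = "transparency"  # disclosure duties only
    MINIMAL = "minimal"            # voluntary codes of conduct

# Simplified, non-exhaustive domain lists for illustration only —
# the Act's real scoping rules are far more detailed.
PROHIBITED_PRACTICES = {"social_scoring", "subliminal_manipulation",
                        "biometric_categorisation_sensitive"}
HIGH_RISK_DOMAINS = {"credit_scoring", "recruitment", "critical_infrastructure",
                     "education_assessment", "border_control"}
TRANSPARENCY_DOMAINS = {"chatbot", "synthetic_media", "emotion_recognition"}

def triage(use_case: str) -> RiskTier:
    """Rough first-pass classification of an AI use case by risk tier."""
    if use_case in PROHIBITED_PRACTICES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_DOMAINS:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

print(triage("credit_scoring").value)  # high
```

The point of a helper like this is not automation of legal judgment but consistency: every system in the portfolio gets asked the same questions in the same order, and edge cases get escalated to counsel rather than classified by intuition.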

The Compliance Timeline

The Act does not arrive all at once. Its obligations phase in over a three-year period, giving organizations time to adapt — but only if they begin now.

| Milestone | Date |
|-----------|------|
| Act entered into force | August 2024 |
| Prohibited practices ban enforceable | February 2025 |
| High-risk system requirements active | August 2026 |
| Full implementation | August 2027 |

The February 2025 deadline for prohibited practices has already passed. Any Saudi enterprise that has not reviewed its AI portfolio against the prohibited categories is already behind. The August 2026 deadline for high-risk systems is the more consequential near-term pressure point — and given the documentation, governance, and conformity assessment work those requirements demand, organizations that begin preparing in mid-2026 will not have enough time.

Three Exposure Scenarios

The extraterritorial scope of the Act creates distinct compliance profiles depending on how a Saudi organization relates to the EU market. Understanding which scenario applies — or which combination — determines both the urgency and the nature of the response required.

The clearest case is direct EU operations: a Saudi parent company with subsidiaries, offices, or operational infrastructure in EU member states. Here, full compliance is not a question of extraterritorial reach but of direct territorial application. Any AI system used in those EU operations falls within the Act. The organization must appoint an authorized representative in the EU — a legal entity that can interface with regulators and bear formal responsibility for compliance obligations. Market surveillance authorities in EU member states have enforcement jurisdiction, and the penalties for non-compliance are substantial: fines for violations of the prohibited-practices rules can reach thirty-five million euros or seven percent of global annual turnover, whichever is higher, while breaches of the high-risk obligations can draw up to fifteen million euros or three percent.

The second scenario covers organizations that provide AI-powered products or services to EU customers without maintaining a direct EU presence. Under the market-placement logic the Act employs, providing such services makes the Saudi company a "provider" within the EU regulatory framework. Conformity assessment procedures must be established before a high-risk system reaches EU users. Once a high-risk system passes that assessment, it must carry CE marking — the EU certification indicating that a product meets applicable requirements — before deployment. This creates an operational burden that many Saudi technology companies have not yet anticipated, because it demands engagement with external assessment bodies, not merely internal documentation.

The third scenario is the most easily overlooked. If a Saudi organization's AI systems process personal data belonging to EU residents — whether in financial transactions, healthcare services, logistics, or any other domain — the EU AI Act supplements rather than replaces the GDPR obligations that already apply. The additional layer requires documentation of AI decision-making processes in enough detail to demonstrate that automated decisions affecting EU data subjects meet the Act's standards for transparency and human oversight. For organizations already working through GDPR compliance, this is a compounding challenge, not a separate one.

Where SDAIA Alignment Helps — and Where It Falls Short

Saudi organizations that have invested in aligning with SDAIA's AI Ethics Principles are not starting from zero. The alignment between SDAIA's framework and the EU AI Act's substantive requirements is genuine and meaningful. SDAIA's emphasis on fairness maps to the Act's requirements for bias mitigation and non-discrimination in training data. SDAIA's privacy principles reinforce the GDPR posture that EU compliance already demands. The transparency and human oversight requirements that SDAIA encourages correspond directly to what the EU Act mandates for high-risk systems.

This alignment reflects a deliberate convergence. Saudi Arabia's AI governance architecture was developed with awareness of global standards, and SDAIA has worked to position the Kingdom's framework within the emerging international consensus rather than against it. For organizations that have treated SDAIA's principles as a serious governance commitment rather than a reputational exercise, the EU AI Act's requirements will feel familiar in spirit — if not yet fully met in procedural detail.

The gaps, however, are real. SDAIA's framework does not require the formal conformity assessments that the EU Act mandates for high-risk systems. It does not specify CE marking. It does not establish the kind of regulatory enforcement architecture — with designated national competent authorities, market surveillance powers, and cross-border coordination mechanisms — that Brussels has built. An organization that can demonstrate SDAIA alignment has a credible baseline and a genuine head start, but it will still need to undertake systematic documentation work, establish formal risk management processes, and in some cases engage notified bodies for conformity assessment.

Building a Compliance Position

The practical work of EU AI Act compliance for a Saudi enterprise begins with two exercises that must happen before any remediation effort can be meaningfully designed.

The first is inventory. Every AI system in use across the organization — not just those that feel obviously significant, but those embedded in HR platforms, procurement software, customer service workflows, and financial reporting tools — must be documented and classified against the Act's risk categories. This exercise routinely surfaces surprises. AI capabilities procured through third-party vendors, or introduced through software updates without explicit AI procurement decisions, often go undocumented until a systematic review forces them into view. An AI system that a vendor describes as "intelligent automation" may qualify as a high-risk system under the Act's definitions; the label on the box is not the test.
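A minimal sketch of what an inventory record might capture, assuming a simple internal register rather than any particular GRC tool. The field names and the `in_scope` heuristic are illustrative assumptions, not the Act's scoping test; the two trigger fields mirror the exposure scenarios discussed above.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in an internal AI inventory — fields are illustrative."""
    name: str
    vendor: str                 # "internal" if built in-house
    business_function: str      # e.g. HR, procurement, customer service
    processes_eu_data: bool     # handles EU residents' personal data?
    outputs_used_in_eu: bool    # extraterritorial trigger: outputs used in the EU
    risk_tier: str = "unclassified"
    notes: list = field(default_factory=list)

def in_scope(record: AISystemRecord) -> bool:
    """Rough flag for systems that plausibly fall within the Act's reach."""
    return record.processes_eu_data or record.outputs_used_in_eu

inventory = [
    AISystemRecord("CV screening", "SaaS vendor", "HR", False, True),
    AISystemRecord("Spam filter", "internal", "IT", False, False),
]
print([r.name for r in inventory if in_scope(r)])  # ['CV screening']
```

Even a register this simple forces the question the scenario in the introduction illustrates: a system can look purely domestic until someone records where its outputs are actually used.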

The second exercise is a gap analysis against high-risk requirements. For each system that falls into the high-risk category, the organization must assess the state of its risk management documentation, its data governance practices, its technical documentation, its human oversight mechanisms, and its conformity assessment readiness. The gap analysis will almost certainly reveal that documentation is the primary deficiency. Most organizations have governance intentions but not the written, auditable artifacts that the Act requires.
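The five requirement areas named above can be framed as a checklist, where the question for each area is whether a written, auditable artifact exists — not whether the intention does. A minimal sketch, with invented status data:

```python
# The five high-risk requirement areas discussed above, as a checklist.
REQUIREMENT_AREAS = [
    "risk_management_documentation",
    "data_governance",
    "technical_documentation",
    "human_oversight",
    "conformity_assessment_readiness",
]

def gap_analysis(evidence: dict) -> list:
    """Return the requirement areas with no auditable artifact yet."""
    return [area for area in REQUIREMENT_AREAS if not evidence.get(area, False)]

# Hypothetical status for one high-risk system: artifacts exist for two areas.
status = {"data_governance": True, "human_oversight": True}
print(gap_analysis(status))
```

The default-to-False lookup is the operative design choice: an area with no recorded evidence is treated as a gap, which matches the article's point that undocumented governance intentions do not count.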

The path from gap analysis to compliance involves building those artifacts — risk management records, data governance protocols, technical documentation packages, conformity assessment procedures — and then establishing the operational discipline to maintain them continuously. This is not a project with a defined endpoint. The Act's requirements are ongoing; conformity must be demonstrated not at a single moment but across the operational life of each high-risk system.

For systems that require CE marking, engagement with EU-recognized notified bodies is necessary. Saudi organizations will need to either retain EU legal counsel with AI regulatory expertise or establish relationships with EU-based authorized representatives who can manage the interface with regulators on an ongoing basis. Neither arrangement is quick to establish, and the demand for qualified notified bodies is already high across European markets.

Vendor relationships also require attention. Many Saudi organizations have procured AI capabilities from third-party providers — global SaaS platforms, regional technology vendors, cloud providers offering AI APIs as a service. The Act creates obligations that extend through supply chains: if a vendor provides an AI system that falls into the high-risk category and is used in EU-affecting operations, the Saudi deployer carries compliance responsibilities regardless of where the AI was built. Vendor contracts should be reviewed to establish where responsibility for conformity assessment lies, and procurement processes going forward should include EU AI Act compliance as a standard evaluation criterion alongside security and data processing terms.

The Regulatory Horizon

The EU AI Act is not the end of this process; it is the beginning. Brussels has historically set the global floor for technology regulation, and the GDPR's emergence as a de facto global standard — adopted in whole or in significant part by regulators from Japan to Brazil to India — is likely to repeat with AI. Saudi Arabia's own AI governance framework will continue to evolve, and the direction of travel points toward closer alignment with international standards rather than divergence from them.

Organizations that build genuine EU AI Act compliance capacity are therefore not simply solving a jurisdictional problem. They are investing in an institutional competency — systematic AI governance, rigorous documentation, continuous risk management — that will be relevant regardless of which regulator acts next. The enterprises that will be best positioned to compete in international markets over the coming decade are those that treat regulatory compliance not as a burden to be managed but as an architectural commitment embedded in how they build, deploy, and oversee AI systems.

The credit-scoring company in the opening scenario, faced with the question of whether it is in compliance with the EU Act, has a difficult few months ahead. But it also has an opportunity. The work of answering that question — building the documentation, establishing the processes, reviewing the data governance — will produce a stronger institution when it is done. The deadline is August 2026. That is not as far away as it sounds.


Published by PeopleSafetyLab — AI safety and governance research for KSA organizations.


Nora Al-Rashidi

AI governance researcher specialising in regulatory compliance for organisations in Saudi Arabia and the GCC. Examines how SDAIA, SAMA, and the NCA's overlapping frameworks interact — what that means for risk, audit, and board-level accountability.
