
AI Vendor Management: Contracts, SLAs, and Compliance for Saudi Enterprises

Nora Al-Rashidi | March 6, 2026 | 10 min read

In the summer of 2025, a Saudi healthcare organization discovered that the AI vendor processing its patient records for clinical risk scoring had been routing that data through a subcontractor whose infrastructure sat outside the Kingdom. No one had asked about subcontractors during procurement. The original contract said nothing about them. The regulatory exposure under the Personal Data Protection Law was immediate and significant, and the options available to the organization at that point were all expensive.

What made the situation unusual was not that it happened, but that the problem was discovered before a regulator found it first.

The AI vendor relationship in Saudi Arabia is a governance problem that consistently gets misclassified as a procurement problem. Organizations negotiate prices, evaluate technical capabilities, assess integration complexity, and do not, in many cases, address the governance obligations that the relationship creates until an incident forces the issue. By then, the contract has been signed, the integration has been built, the data has been flowing — and the leverage that existed during negotiation has disappeared.

The regulatory architecture governing AI vendors in KSA does not allow organizations to delegate accountability to the vendor. Under the Personal Data Protection Law, ultimate responsibility for personal data belongs to the organization that collects it, not the vendor that processes it on their behalf. SDAIA's AI Ethics Guidelines extend that accountability to encompass the ethical behavior of AI systems throughout their deployment lifecycle. The NCA's Third-Party Risk Management requirements treat vendors with access to sensitive systems as potential vectors of cybersecurity exposure. SAMA adds its own outsourcing governance requirements on top of these for financial institutions. A vendor who is technically competent but contractually unbound to these obligations is not a managed vendor relationship — it is an unmanaged liability.

The Due Diligence That Changes the Conversation

Structured due diligence before a contract is signed is the governance function that most vendor relationships skip or compress. It is also the function that does the most to determine what kind of relationship you end up in.

Regulatory compliance verification is the first dimension. The questions that matter here are not answered by a vendor's marketing materials or a self-attested compliance checkbox. Where is data stored and processed, and does it stay within KSA borders or cross to another jurisdiction under PDPL's cross-border transfer provisions? What documentation exists demonstrating alignment with SDAIA's AI Ethics and Operational Guidelines — and can you see it before signing? For vendors serving critical infrastructure, what is their demonstrated position relative to NCA certification requirements? For financial institutions, does the vendor meet SAMA's outsourcing guidelines, and can that be contractually established?

Technical and operational assessment is the second dimension. Any vendor deploying AI in a regulated context should be able to produce documentation of how their models are developed, what data trained them, how performance is validated, and what the process is for detecting and responding to model drift. For regulated sector applications, explainability and interpretability reports should exist and be producible on request. The absence of this documentation is not a technical limitation — it is a governance gap, and it is a preview of what happens when a regulator asks the same questions later.

Business viability and risk concentration is the third dimension, and the one organizations most consistently neglect. AI vendor relationships create dependencies that become liabilities when a vendor exits the market, changes ownership, or loses a key technical team. Understanding a vendor's financial health before a long-term commitment is obvious in principle and often skipped in practice. Understanding whether a vendor's ownership structure creates export-control considerations or foreign-influence concerns in KSA is more specialized but increasingly relevant as the AI supply chain becomes geographically complex. Understanding the migration path — what it would take to move to an alternative provider or an in-house solution — is a question that belongs in due diligence, not in an emergency.

What Contracts Must Accomplish

The contract is the instrument that converts the vendor relationship from a voluntary arrangement into an enforceable set of obligations. Its purpose is not primarily to manage the ideal scenario, where the vendor performs well and nothing goes wrong. Its purpose is to determine what happens when something does go wrong, and who is accountable for it.

The most consequential obligation is the regulatory compliance representation: the vendor must warrant, in writing, that the services they provide comply with KSA law, including PDPL, SDAIA guidelines, NCA cybersecurity regulations, and any sector-specific frameworks that apply. This transfers liability to the vendor and creates a proactive notification obligation — the vendor must inform the organization promptly if their compliance status changes in ways that affect the services. Without this, a vendor's verbal assurances carry no legal weight.

Data residency obligations must be contractually specific rather than aspirationally stated. A vendor's general assurance of PDPL compliance is not the same as a contractual commitment to store and process data within KSA borders — and is not the same as the documentation an organization needs to produce when SDAIA or NCA asks. The contract should specify where data is stored, what safeguards exist for any cross-border transfer, and what documentation the vendor will maintain and make available to support regulatory review. Subcontractors who touch that data must be subject to the same requirements, and the contract must require the vendor to disclose and obtain consent before introducing any new subcontractors into the data pipeline.

Model transparency obligations are where many organizations find the sharpest gap between vendor claims and vendor capacity. A contract requiring the vendor to provide model cards documenting architecture, training data sources, intended use cases, limitations, and known biases — and specifying a timeline for updating that documentation when models change materially — creates a documentation pipeline that organizations need both for internal governance and for regulatory defensibility. A vendor who resists this provision is a vendor who does not have this documentation, and who cannot be held accountable for what their model actually does.
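A model-card obligation is easier to enforce when the contract points at a concrete schema rather than "documentation." A minimal sketch of what that might look like in Python; the field names are illustrative assumptions, not drawn from any SDAIA template or vendor standard:

```python
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class ModelCard:
    """Illustrative model-card schema; fields mirror the contractual
    obligations discussed above, not any official format."""
    model_name: str
    version: str
    architecture: str
    training_data_sources: List[str]
    intended_use_cases: List[str]
    limitations: List[str]
    known_biases: List[str]
    last_updated: str  # ISO 8601 date of the last material model change

# Sections a vendor must not leave empty for the card to be usable
REQUIRED_NONEMPTY = [
    "training_data_sources",
    "intended_use_cases",
    "limitations",
    "known_biases",
]

def missing_sections(card: ModelCard) -> List[str]:
    """Return the documentation sections the vendor has left empty."""
    data = asdict(card)
    return [name for name in REQUIRED_NONEMPTY if not data[name]]
```

A card that comes back with empty `limitations` and `known_biases` sections is exactly the gap the paragraph above describes: the vendor has the architecture slide but not the governance documentation.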

Incident notification provisions determine how quickly an organization can respond to a problem before it escalates into a regulatory obligation. A 24-hour notification window for security breaches, model failures, significant accuracy degradations, and potential bias incidents aligns with NCA and PDPL reporting timelines. Without a contractual notification obligation, organizations discover incidents when they notice the effects — which is too late to prevent the regulatory exposure, only early enough to begin managing it.
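The arithmetic of a notification window is trivial, which is precisely why it belongs in monitoring code rather than in a dispute. A minimal sketch, assuming a 24-hour contractual window (the window length is the contract's figure, not a statutory one):

```python
from datetime import datetime, timedelta

# Contractual notification window; 24 hours is the figure discussed
# above, not a number fixed by NCA or PDPL for every incident class.
NOTIFICATION_WINDOW = timedelta(hours=24)

def notified_in_time(detected_at: datetime, notified_at: datetime) -> bool:
    """True if the vendor's notice arrived within the contracted window."""
    return notified_at - detected_at <= NOTIFICATION_WINDOW
```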

Audit rights, if not established contractually with specificity, typically dissolve when organizations try to exercise them. The right to audit "as the parties may agree" gives a resistant vendor unlimited scope to negotiate the audit into irrelevance. Audit rights need to specify what can be examined — data processing practices, model governance documentation, incident logs, employee training records — and the process under which those audits can be conducted. The degree to which a vendor resists specific audit provisions is a signal about how they intend to handle compliance scrutiny when it arrives from a regulator rather than a client.

Service-level commitments for AI systems differ fundamentally from the uptime and availability metrics that govern traditional IT services. Model accuracy, measured against independently validated test data rather than only the vendor's own benchmarks, must be contractually defined. For classification systems — fraud detection, credit scoring, medical triage — precision, recall, and F1 score each capture different failure modes, and the acceptable thresholds for each depend on the specific consequences of false positives versus false negatives. For high-stakes applications in regulated sectors, fairness metrics — demographic parity, disparate impact ratios — belong in the SLA as quantifiable commitments, not advisory guidance.
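These metrics are concrete enough to check mechanically against a confusion matrix. A minimal sketch in Python; the four-fifths convention for disparate impact is a common rule of thumb, not a threshold drawn from any KSA framework, and acceptable values for each metric are for the contract to fix:

```python
def precision(tp: int, fp: int) -> float:
    """Of the cases flagged positive, the fraction that truly were."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp: int, fn: int) -> float:
    """Of the truly positive cases, the fraction that were flagged."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def f1(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r) if (p + r) else 0.0

def disparate_impact(rate_protected: float, rate_reference: float) -> float:
    """Ratio of positive-outcome rates between groups; the common
    four-fifths rule of thumb flags ratios below 0.8."""
    return rate_protected / rate_reference if rate_reference else 0.0
```

The point of the three classification metrics is that they fail differently: a fraud model with high precision but low recall quietly waves fraud through, while the reverse buries investigators in false alarms. The SLA has to pick thresholds for each, not a single "accuracy" number.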

Exit terms are the provisions that organizations most consistently underinvest in, and then most urgently need when the relationship ends. A vendor who controls your training data, model weights, and system integrations has leverage over an exit that cannot be recovered once it is in place. The contract should specify, before it is signed, that all customer data will be returned in machine-readable format on termination, that trained model weights or equivalent artifacts will be provided subject to intellectual property arrangements, and that the vendor will provide transition support for a specified period. The leverage to negotiate these terms exists at contract signature. It does not exist when you are trying to migrate away from a vendor under deadline.

Liability provisions are the financial mechanism that makes the other obligations real. A vendor who insists on capping liability for regulatory violations, including PDPL breaches and SDAIA non-compliance, is signaling how seriously they intend to take those obligations. Uncapped liability for regulatory violations is the standard that makes compliance commitments consequential rather than advisory.

The Ongoing Governance Function

A signed contract is the beginning of a governance function, not the resolution of one. The vendor relationship must be actively maintained, assessed, and updated across the deployment lifecycle.

Quarterly reviews should verify that vendor processes remain current with regulatory updates from SDAIA, NCA, and SAMA; that SLA performance has stayed within contracted thresholds; that model documentation is current; and that the vendor has been responsive to any incidents or anomalies during the period. Each review should produce a written record, both for internal governance and as documentation in the event of a regulatory inquiry. Annual assessments involving an independent third party carry more weight with regulators than internal reviews alone and are increasingly expected as standard practice in regulated sectors.

Automated monitoring — dashboards for SLA performance, data residency verification, model drift detection, and security events — should operate between formal reviews, surfacing anomalies for investigation before they become incidents. The objective is continuity: governance that functions on the days between formal reviews, not only during the reviews themselves.
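Model drift detection in particular reduces to a comparison between the score distribution the model was validated on and the one it sees in production. One common statistic is the population stability index; a minimal sketch, with the 0.2 alert threshold noted as convention rather than a regulatory figure:

```python
import math

def population_stability_index(expected, actual, eps=1e-6):
    """PSI between two binned distributions, given as lists of
    proportions over the same bins. A common rule of thumb treats
    PSI > 0.2 as significant drift; that threshold is convention,
    not a figure from SDAIA or NCA guidance."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # guard against empty bins
        psi += (a - e) * math.log(a / e)
    return psi
```

Run between formal reviews, a check like this is what turns "the vendor will notify us of accuracy degradation" from a clause into an observable: the organization sees the drift itself rather than waiting for the vendor's notice.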

Certain patterns in a vendor relationship warrant immediate escalation regardless of the review schedule. Repeated SLA failures within a short period indicate a systemic problem that formal reviews are unlikely to resolve without pressure. Delays in delivering model cards and bias reports typically mean the documentation does not exist. Resistance to audits or opacity about subcontractors introduces regulatory risk that cannot be managed without visibility. Slow or indifferent responses to regulatory changes from SDAIA or NCA preview how the vendor will perform when compliance pressure arrives from a regulator rather than a client.

Sector Dimensions

The baseline framework applies across industries, but each KSA sector carries requirements that shape how contracts should be calibrated.

In financial services, SAMA's outsourcing risk framework and model risk management guidelines impose specific obligations that vendor contracts must reflect. Comprehensive audit trails are required for all AI-driven decisions affecting customers — this is a compliance expectation for SAMA-regulated entities, not a recommendation. Healthcare introduces the possibility that AI systems informing clinical decisions may meet the threshold for medical device classification under SFDA regulations, which triggers clinical validation and post-market monitoring obligations that vendors must be contractually prepared to support. For energy and critical infrastructure, NCA's Critical Infrastructure Cybersecurity Regulations establish the baseline, and AI systems interfacing with operational technology require specific contractual treatment of those integration points. Government and smart-cities deployments involve questions of national security review, data classification alignment with NCA standards, and heightened explainability requirements for citizen-facing services.

The Saudi healthcare organization that discovered its vendor's subcontractor problem before the regulator did was fortunate in the most limited sense: it had the option to address the problem rather than respond to an enforcement action. What it had also demonstrated, inadvertently, was that vendor management as a continuous governance function — one that monitors subcontractors, reviews data flows, and catches compliance gaps before they become regulatory events — is not procedural overhead. It is the difference between the organization that manages its AI supply chain and the organization that explains it to SDAIA.

Published by PeopleSafetyLab — AI safety and governance research for KSA organizations.


Nora Al-Rashidi

AI governance researcher specialising in regulatory compliance for organisations in Saudi Arabia and the GCC. Examines how SDAIA, SAMA, and the NCA's overlapping frameworks interact — what that means for risk, audit, and board-level accountability.
