Saudi Arabia's Draft Responsible AI Policy Closes Consultation on 3 May, and It Looks Nothing Like the EU AI Act

SDAIA's draft Responsible AI Policy imposes universal design principles on every AI developer in Saudi Arabia.

Saudi Arabia is about to finalise the most ambitious AI regulatory framework in the Arab world, and it is doing so on its own terms rather than by copying Brussels. The Saudi Data and AI Authority (SDAIA) has published a draft Responsible AI Policy for public consultation, with submissions closing on 3 May 2026, just days from now.

The document is the most comprehensive AI governance effort the kingdom has produced, and it sits at the centre of the Cabinet-endorsed Year of AI designation. It also represents a deliberate architectural choice: rather than regulate by risk category like the EU AI Act, Riyadh is regulating by design principles applied universally across government, private sector, non-profit, and individual AI development.

The draft policy establishes seven foundational ethics principles, including integrity and fairness, privacy and security, humanity, and social and environmental considerations.

It imposes specific technical obligations: embedded watermarks in all AI outputs, content-tracking mechanisms for provenance, bias mitigation via data-source diversification, interpretable model features, and privacy, transparency, and safety built into design rather than bolted on afterwards. That architecture shifts the compliance burden from after-the-fact assessment to pre-deployment design, and that matters enormously for every company building AI for Saudi consumers.
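What "built into design" means in practice can be sketched with a minimal content-tracking example. The schema and function names below are hypothetical illustrations, not taken from the draft policy; a real deployment would more likely follow an established provenance standard such as C2PA.

```python
import hashlib
from datetime import datetime, timezone

def generate_with_provenance(model_output: str, model_id: str) -> dict:
    """Attach content-tracking metadata at generation time (by design),
    rather than scanning outputs after deployment."""
    return {
        "content": model_output,
        "model_id": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        # The hash is a tamper-evident fingerprint for later tracing.
        "content_hash": hashlib.sha256(model_output.encode()).hexdigest(),
    }

def verify_provenance(record: dict) -> bool:
    """Re-derive the fingerprint to confirm the content was not altered."""
    expected = hashlib.sha256(record["content"].encode()).hexdigest()
    return record["content_hash"] == expected

out = generate_with_provenance("Example AI-generated text.", "demo-model-1")
```

The point of the sketch is architectural: provenance is stamped at the moment of generation, so compliance checking becomes a verification step rather than a forensic investigation.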

What The Seven Principles Actually Require

Read closely, the SDAIA document is prescriptive where the EU AI Act is classificatory. Where Brussels divides AI systems into unacceptable, high, limited, and minimal risk tiers and assigns obligations accordingly, Riyadh tells every AI developer, regardless of use case or scale, to embed the same foundational design features.

Watermarking applies to all AI outputs, not just deepfakes. Bias mitigation applies to all training pipelines, not just high-risk applications. Interpretability applies to all models, not just those used in justice or employment. Content tracking must be integrated from the start, not retrofitted after abuse is discovered.
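The bias-mitigation requirement, "data-source diversification", can be illustrated with a simple rebalancing sketch: cap any single source's share of the training corpus so no one source dominates. This algorithm is illustrative only; the draft policy names the goal, not a specific method.

```python
import random
from collections import defaultdict

def diversify_by_source(examples, max_share=0.3, seed=0):
    """Cap each data source's contribution at max_share of the original
    corpus size, a crude form of data-source diversification.
    (Hypothetical sketch; not an SDAIA-prescribed algorithm.)"""
    rng = random.Random(seed)
    by_source = defaultdict(list)
    for ex in examples:
        by_source[ex["source"]].append(ex)
    cap = max(1, int(len(examples) * max_share))
    balanced = []
    for items in by_source.values():
        rng.shuffle(items)          # sample, don't just truncate
        balanced.extend(items[:cap])
    return balanced

# A corpus dominated by one source (80 of 100 examples):
data = (
    [{"source": "forum", "text": f"f{i}"} for i in range(80)]
    + [{"source": "news", "text": f"n{i}"} for i in range(10)]
    + [{"source": "docs", "text": f"d{i}"} for i in range(10)]
)
balanced = diversify_by_source(data)
```

After rebalancing, the dominant source is capped at 30 examples while the minority sources are kept whole, shrinking the dominant source's skew without discarding rare data.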

For Saudi-based AI developers, this lowers ambiguity. For international vendors, it raises the floor. A product that ships in the EU under a limited-risk designation may still need substantial engineering work before it complies with SDAIA's baseline for deployment inside the kingdom.


The PDPL Is Already Doing The Enforcement

SDAIA's regulatory footprint is not theoretical. The Personal Data Protection Law (PDPL), administered by the authority, took effect on 14 September 2024 and has been actively enforced ever since. SDAIA has issued 48 violation decisions across 2024 and 2025, demonstrating that this is operational regulation with real penalties, not aspirational guidance. The same enforcement apparatus will be applied to the Responsible AI Policy when it is finalised after the 3 May consultation closes.

That distinguishes Saudi Arabia's approach from Qatar's. Qatar's National AI Ethics Code is the region's first genuinely binding policy, but it applies primarily to the public sector. SDAIA's framework applies to every AI user and developer inside Saudi borders, including private companies, non-profits, and individuals. The reach is wider, and the enforcement infrastructure is already running.

By The Numbers

  • 3 May 2026: deadline for public consultation submissions on SDAIA's draft Responsible AI Policy
  • 7: foundational AI ethics principles established by the draft policy
  • 14 September 2024: date Saudi Arabia's Personal Data Protection Law took effect under SDAIA administration
  • 48: violation decisions SDAIA issued across 2024 and 2025, demonstrating operational enforcement capacity
  • All AI outputs: scope of the watermarking requirement imposed by the draft policy

"The draft Responsible AI Policy is the most prescriptive national AI framework in MENA. It shifts compliance from risk-tier classification to universal design principles."

Gulf AI regulatory adviser, briefing on the SDAIA consultation

"Every AI developer building for Saudi consumers needs to treat the 3 May deadline as a product roadmap inflection point, not a legal department filing."

International AI counsel, Middle East practice

How The Policy Compares Across The Gulf

The GCC now has four distinct AI governance approaches in various stages of maturity. Saudi Arabia is pursuing universal design principles, enforced through SDAIA's existing PDPL apparatus. Qatar has the region's first binding national ethics code, focused on public-sector deployment. The UAE is building through sectoral rules from the Cognitive Computing Council, aligned with the UAE AI Strategy and the country's push to become the "world's first AI-native government" by 2027. Bahrain is pursuing portal-led centralised coordination through its National AI Portal.

Country | Regulatory Anchor | Scope | Enforcement Stage
Saudi Arabia | SDAIA Responsible AI Policy (draft) | All sectors, all developers | Consultation closes 3 May 2026
Qatar | National AI Ethics Code | Public sector primary | Binding
UAE | Sectoral (CCC, AI-native gov) | Vertical-specific | Active
Bahrain | National AI Portal | Centralised coordination | Early stage

What International AI Vendors Should Do Before 3 May

For global AI companies selling into Saudi, the immediate to-do list is concrete. Submit consultation input where product functionality maps to specific draft-policy obligations. Review watermarking implementations, because the SDAIA requirement is stricter than most current default configurations. Map bias-mitigation pipelines against the data-source diversification requirement.

Most importantly, align product roadmaps so that interpretability features ship by the time the policy is finalised and enforced, likely in the second half of 2026, with guidance expected via SDAIA's public channels.

For Saudi enterprises deploying third-party AI, the guidance is different: audit vendor compliance against the seven principles before renewing contracts, and hold vendors accountable for shipping the watermarking and content-tracking features the policy will require.
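A vendor audit of this kind can be as simple as a checklist keyed to the draft's named technical obligations. The feature names below mirror the obligations described in this article (watermarking, content tracking, bias mitigation, interpretability, privacy-by-design); they are an illustrative sketch, not an official SDAIA compliance schema.

```python
# Obligations drawn from the draft policy's technical requirements,
# expressed as a hypothetical audit checklist.
REQUIRED_FEATURES = [
    "watermarking_all_outputs",
    "content_tracking",
    "bias_mitigation",
    "interpretability",
    "privacy_by_design",
]

def audit_vendor(vendor_features: dict) -> list:
    """Return the obligations a vendor has not yet demonstrated."""
    return [f for f in REQUIRED_FEATURES if not vendor_features.get(f)]

# A vendor claiming only two of the five obligations:
gaps = audit_vendor({
    "watermarking_all_outputs": True,
    "interpretability": True,
})
```

Running the audit before contract renewal turns the seven-principles review into a concrete gap list the procurement team can put to the vendor.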

The Geopolitical Read

Saudi's regulatory move is also a positioning play. Riyadh has declined to join the US-led Pax Silica semiconductor coalition that Qatar and the UAE signed into, and it is building regulatory independence as the kingdom's Year of AI arrives. That independence matters because it lets Saudi negotiate model access, compute partnerships, and data-sharing agreements from a regulatory baseline it controls rather than one imported from Washington or Brussels.

The EU-Morocco digital dialogue shows what a Brussels-aligned pathway looks like for a MENA state. Saudi is deliberately not taking that path.

The AI in Arabia View: The SDAIA Responsible AI Policy is the most important Arab AI governance document in a decade, and the 3 May consultation deadline is a genuine inflection point for every AI vendor with Saudi exposure. Riyadh has chosen a different architecture from Brussels: universal design principles, not risk-tier classification, with the existing PDPL enforcement apparatus already in place. That is sharper, simpler, and harder to ignore. Expect SDAIA to finalise the policy in mid-2026 and start issuing guidance notes by late 2026. Vendors that ship watermarking, interpretability, and bias mitigation as baseline features will have a market advantage. Those that do not will be slowly excluded.
AI Terms in This Article
responsible AI

Developing and deploying AI with consideration for ethics, fairness, and safety.

AI governance

The policies, standards, and oversight structures for managing AI systems.

regulatory framework

A set of rules and guidelines governing how something can be used.

bias

When an AI system produces unfair or skewed results, often reflecting prejudices in training data.

compute

The processing power needed to train and run AI models.

Frequently Asked Questions

What is SDAIA's Responsible AI Policy?
It is a draft regulatory framework established by the Saudi Data and AI Authority, setting seven foundational ethics principles plus specific technical obligations for watermarking, content tracking, bias mitigation, interpretability, and privacy-by-design. Consultation closes on 3 May 2026.
How does it differ from the EU AI Act?
The EU AI Act classifies AI systems into risk tiers and assigns obligations accordingly. SDAIA's policy applies the same universal design principles to all AI systems regardless of use case. The Saudi approach is prescriptive rather than classificatory, which many engineers find clearer in practice.
Is SDAIA's enforcement capability real?
Yes. SDAIA administers the Personal Data Protection Law, which took effect on 14 September 2024, and has issued 48 violation decisions in 2024 and 2025. The same infrastructure will enforce the Responsible AI Policy once finalised.
Who needs to comply?
Every AI developer and deployer operating inside Saudi borders, including government bodies, private sector companies, non-profits, and individuals publishing AI systems. The reach is wider than Qatar's public-sector-focused ethics code.