Saudi Arabia's Draft Responsible AI Policy Closes Consultation on 3 May, and It Looks Nothing Like the EU AI Act
Saudi Arabia is about to finalise the most ambitious AI regulatory framework in the Arab world, and it is doing so on its own terms rather than by copying Brussels. The Saudi Data and AI Authority (SDAIA) has published a draft Responsible AI Policy for public consultation, with submissions closing on 3 May 2026, just days from now.
The document is the most comprehensive AI governance effort the kingdom has produced, and it sits at the centre of the Cabinet-endorsed Year of AI designation. It also represents a deliberate architectural choice: rather than regulate by risk category like the EU AI Act, Riyadh is regulating by design principles applied universally across government, private sector, non-profit, and individual AI development.
The draft policy establishes seven foundational ethics principles, including integrity and fairness, privacy and security, humanity, and social and environmental considerations.
It imposes specific technical obligations: embedded watermarks in all AI outputs, content-tracking mechanisms for provenance, bias mitigation via data-source diversification, interpretable model features, and privacy, transparency, and safety built into design rather than bolted on afterwards. That architecture shifts the compliance burden from after-the-fact assessment to pre-deployment design, and that matters enormously for every company building AI for Saudi consumers.
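The draft policy names the obligations but, as described here, does not prescribe a mechanism for meeting them. As a loose illustration of what "provenance built into design" can mean in practice, the sketch below signs each AI output with a verifiable record at generation time. Everything in it is hypothetical: the function names, the field layout, and the use of an HMAC over a content hash are assumptions for demonstration, not anything drawn from the SDAIA text or an established standard.

```python
import hashlib
import hmac
import json

# Illustrative only: a managed signing key would live in a key store,
# not in source code.
SECRET_KEY = b"demo-signing-key"

def make_provenance_record(output_text: str, model_id: str) -> dict:
    """Attach a signed provenance record to an AI-generated output
    at the moment of generation (design-time, not after the fact)."""
    digest = hashlib.sha256(output_text.encode("utf-8")).hexdigest()
    record = {"model_id": model_id, "content_sha256": digest}
    # Sign the canonical JSON form of the record's claims.
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(output_text: str, record: dict) -> bool:
    """Check that the output matches the record and the record is untampered."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    actual_digest = hashlib.sha256(output_text.encode("utf-8")).hexdigest()
    if claimed.get("content_sha256") != actual_digest:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode("utf-8")
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

The design point is the ordering: the record is created inside the generation path, so provenance exists for every output by construction, which is the "pre-deployment design" posture the draft policy demands.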
What The Seven Principles Actually Require
Read closely, the SDAIA document is prescriptive where the EU AI Act is classificatory. Where Brussels divides AI systems into unacceptable, high, limited, and minimal risk tiers and assigns obligations accordingly, Riyadh tells every AI developer, regardless of use case or scale, to embed the same foundational design features.
Watermarking applies to all AI outputs, not just deepfakes. Bias mitigation applies to all training pipelines, not just high-risk applications. Interpretability applies to all models, not just those used in justice or employment. Content tracking must be integrated from the start, not bolted on when abuse is discovered.
For Saudi-based AI developers, this lowers ambiguity. For international vendors, it raises the floor. A product that ships in the EU under a limited-risk designation may still need substantial engineering work before it complies with SDAIA's baseline for deployment inside the kingdom.

The PDPL Is Already Doing The Enforcement
SDAIA's regulatory footprint is not theoretical. The Personal Data Protection Law (PDPL), administered by the authority, took effect on 14 September 2024 and has been actively enforced ever since. SDAIA has issued 48 violation decisions across 2024 and 2025, demonstrating that this is operational regulation with real penalties, not aspirational guidance. The same enforcement apparatus will be applied to the Responsible AI Policy once it is finalised after the consultation closes on 3 May.
That distinguishes Saudi Arabia's approach from Qatar's. Qatar's National AI Ethics Code is the region's first genuinely binding policy, but it applies primarily to the public sector. SDAIA's framework applies to every AI user and developer inside Saudi borders, including private companies, non-profits, and individuals. The reach is wider, and the enforcement infrastructure is already running.
By The Numbers
- 3 May 2026: deadline for public consultation submissions on SDAIA's draft Responsible AI Policy
- 7: foundational AI ethics principles established by the draft policy
- 14 September 2024: date Saudi Arabia's Personal Data Protection Law took effect under SDAIA administration
- 48: violation decisions SDAIA issued across 2024 and 2025, demonstrating operational enforcement capacity
- All AI outputs: scope of the watermarking requirement imposed by the draft policy
The draft Responsible AI Policy is the most prescriptive national AI framework in MENA. It shifts compliance from risk-tier classification to universal design principles.
Every AI developer building for Saudi consumers should treat the 3 May deadline as a product-roadmap inflection point, not a legal-department formality.
How The Policy Compares Across The Gulf
The GCC now has four distinct AI governance approaches in various stages of maturity. Saudi Arabia is pursuing universal design principles, enforced through SDAIA's existing PDPL apparatus. Qatar has the region's first binding national ethics code, focused on public-sector deployment. The UAE is building through sectoral rules from the Cognitive Computing Council, aligned with the UAE AI Strategy and with the country's push to become the "world's first AI-native government" by 2027.
Bahrain is pursuing portal-led centralised coordination through its National AI Portal.
| Country | Regulatory Anchor | Scope | Enforcement Stage |
|---|---|---|---|
| Saudi Arabia | SDAIA Responsible AI Policy (draft) | All sectors, all developers | Consultation closes 3 May 2026 |
| Qatar | National AI Ethics Code | Public sector primary | Binding |
| UAE | Sectoral (CCC, AI-native gov) | Vertical-specific | Active |
| Bahrain | National AI Portal | Centralised coordination | Early stage |
What International AI Vendors Should Do Before 3 May
For global AI companies selling into Saudi, the immediate to-do list is concrete. Submit consultation input where product functionality maps to specific draft-policy obligations. Review watermarking implementations, because the SDAIA requirement is stricter than most current default configurations. Map bias-mitigation pipelines against the data-source diversification requirement.
Most importantly, align product roadmaps so that interpretability features ship by the time the policy is finalised and enforced, likely in the second half of 2026, with guidance expected via SDAIA's public channels.
For Saudi enterprises deploying third-party AI, the guidance is different: audit vendor compliance against the seven principles before renewing contracts, and hold vendors accountable for shipping the watermarking and content-tracking features the policy will require.
The Geopolitical Read
Saudi Arabia's regulatory move is also a positioning play. Riyadh has declined to join the US-led Pax Silica semiconductor coalition that Qatar and the UAE have joined, and it is building regulatory independence as the kingdom's Year of AI arrives. That independence matters because it lets Saudi Arabia negotiate model access, compute partnerships, and data-sharing agreements from a regulatory baseline it controls rather than one imported from Washington or Brussels.
The EU-Morocco digital dialogue shows what a Brussels-aligned pathway looks like for a MENA state. Saudi is deliberately not taking that path.