## Who is leading, quietly
**Emirates NBD**, **First Abu Dhabi Bank**, **Mashreq**, **Al Rajhi Bank**, and **Saudi National Bank** are all running active AI fraud programmes. Most pair in-house data science teams with vendor stacks from **Mastercard Brighterion**, **Visa Protect**, **SAS**, **Feedzai**, and **FICO**, layered on top of local startups that specialise in Arabic-language document forensics. Central banks are watching, sometimes co-sponsoring, and in the UAE's case actively setting expectations through the new [CBUAE AI guidance for financial institutions](/policy/cbuae-ai-guidance-financial-institutions-2026). SAMA is taking a lighter touch so far, though its cyber-risk handbook is being updated to reflect agentic AI exposure.
> "Dubai and Riyadh are piloting AI applications in payments and compliance, though workforce concerns persist."
> — Findings from GSMA MENA Payments Outlook, April 2026
> "2026's cybersecurity threat landscape is high-tech, high-stakes, and fast-changing. From AI-driven hacks to deepfake scams eroding trust in communications, financial institutions must assume the attacker has the same tools they do."
> — Crowdfund Insider, 2026 Cybersecurity Predictions
## What deepfake defence looks like in practice
At the onboarding layer, banks in Dubai are experimenting with liveness checks designed to defeat voice cloning and face swaps: customers respond to Arabic speech prompts that change every few seconds, and responses must arrive with low latency, since real-time deepfake generation struggles to keep pace. During authentication, behavioural-biometrics models track typing cadence, scroll patterns, and handset tilt to spot AI-driven remote-access sessions. On the payment rail, large outbound transfers are checked against an internal knowledge graph that flags unusual counterparties, and AI-generated business emails are cross-checked against previous genuine correspondence. Fintechs such as **Tabby**, **Tamara**, and **Mamo** are deploying similar tooling on consumer flows, often earlier than the banks.
| Risk vector | Where AI helps | Where it can make things worse |
|---|---|---|
| Card fraud | Real-time scoring, cross-issuer signal sharing | False declines, customer friction |
| AML and sanctions | Unified AI orchestration, Arabic entity matching | Model drift, opaque SAR decisions |
| Deepfake onboarding | Dynamic liveness, behavioural biometrics | Exclusion of legitimate customers |
| Synthetic identity | Graph analytics across banks | Data-sharing and privacy trade-offs |
| Business email compromise | Counterparty graph, style models | Alert fatigue for corporate clients |
**The AI in Arabia View:** Gulf payments are about to split into two groups. One will treat AI as a set of narrow anti-fraud tools bolted on top of legacy rule engines, and will pay for that choice every quarter in fraud losses and regulator attention. The other will rebuild its payment stack around AI as the default decision layer, with explainability and human oversight built in from day one. Dubai and Riyadh have the regulatory backbone, the talent pool, and the sovereign AI ambition to produce the second kind of bank. The IMF's late April report will be most useful if it names which is which.
## Frequently Asked Questions
### Why are Dubai and Riyadh leading Gulf AI fraud defence?
They combine three ingredients that most regions lack together: deep card and digital payment volumes, active central-bank AI guidance, and a concentrated pool of AI engineering talent. That lets both hubs run production AI fraud pilots across retail and corporate flows at a speed other emerging markets cannot match.
### Is deepfake fraud really a live risk for Gulf banks?
Yes. Voice cloning against relationship managers, face-swapped onboarding attempts at retail banks, and AI-generated invoices inside corporate payments are all being observed in live traffic. Banks are reporting measurable upticks, and that is a core reason onboarding liveness checks are being rebuilt around AI-specific attacks.
### How does the CBUAE guidance change what banks must do?
CBUAE's AI guidance requires board-level AI risk ownership, documented model risk management, explainability for consumer-facing decisions, human oversight for high-stakes automation, and clear logging of AI-assisted actions. Banks that meet it will be best placed to roll AI fraud tools deeper into production.
### What should corporate customers do now?
Corporate customers should expect stronger challenge flows on large transfers, lower tolerance for unverified counterparties, and more AI-generated communications caught by the bank. Treasury teams should tighten vendor onboarding, rotate payment authorisations, and keep real, verifiable human channels open for exception handling.
Is the Gulf about to set the global AI fraud-defence standard, or simply reinvent rule engines with prettier dashboards? Drop your take in the comments below.