AI in Arabia

Explicit Deepfakes Lead to Grok Bans in Indonesia, Malaysia, and the Philippines

Indonesia, Malaysia, and the Philippines become the first nations to ban Grok AI after users created non-consensual sexual deepfakes of women and children.

Updated Apr 19, 2026 · 4 min read
AI Snapshot

The TL;DR: what matters, fast.

Malaysia, Indonesia, and the Philippines ban Grok AI over non-consensual sexual deepfakes within 48 hours

First global regulatory action targeting AI-generated explicit content involving real individuals

Coordinated response affects 275+ million users, forcing AI content moderation rethink

Southeast Asian Nations Draw the Line on AI-Generated Sexual Content

Indonesia, Malaysia, and the Philippines have become the first countries globally to block access to Elon Musk's Grok AI chatbot following widespread misuse of the platform to create non-consensual sexual deepfakes. The unprecedented regulatory action marks a significant escalation in the global debate over AI ethics and content moderation.

The bans, implemented between 10 and 12 January 2026, target Grok's image generation capabilities integrated within X (formerly Twitter). Users across Southeast Asia were exploiting the AI tool to create manipulated images of real individuals, including women and children, in sexually explicit scenarios.

Indonesia's Minister of Communication and Digital Affairs, Meutya Hafid, condemned the platform's misuse as "a serious violation of human rights, dignity, and the security of citizens in the digital space." Malaysia's Communications and Multimedia Commission (MCMC) similarly criticised X's inadequate response to the crisis.

Regional Authorities Take Swift Action

The coordinated response began with Indonesia's Ministry of Communication and Digital Affairs announcing a temporary suspension on 10 January, specifically targeting AI-generated pornography involving women and members of the popular JKT48 idol group. Malaysia followed on 11 January after the MCMC issued notices to both X and xAI on 3 and 8 January respectively.

The Philippines joined the action within 24 hours, with the National Telecommunications Commission ordering telcos to block Grok access under the Cybercrime Prevention Act. Other regulators in the region issued warnings but stopped short of a full ban.

"The government views the practice of non-consensual sexual deepfakes as a serious violation of human rights, dignity, and the security of citizens in the digital space," said Meutya Hafid, Indonesia's Minister of Communication and Digital Affairs.

The MCMC's statement highlighted that Grok was being "misused to generate obscene, sexually explicit, indecent, grossly offensive, and non-consensual manipulated images," criticising X's reliance primarily on user reporting mechanisms rather than proactive prevention measures.

By The Numbers

  • Three countries imposed Grok bans within 48 hours (Indonesia, Malaysia, the Philippines)
  • Indonesia is the world's fourth most populous nation, with roughly 275 million people
  • X restricted Grok's image generation to paying users only on 9 January following the controversy
  • Malaysia criminalised producing or watching deepfake pornography in 2024
  • Malaysia specifically cited concerns over hijab-removal deepfakes of Muslim women

The Technology Behind the Crisis

Grok's integration within X's platform allowed users to generate realistic but fabricated images through simple text prompts. The AI's capabilities were exploited to create disturbing content featuring real individuals without their consent, highlighting critical gaps in content moderation systems.


The controversy particularly affected Indonesia, where strict anti-pornography laws and the 2008 Electronic Information and Transactions Law provide legal backing for the government's swift response. Malaysia's concerns extended to culturally sensitive content, including manipulated images of Muslim women with hijabs digitally removed.

Users in affected countries quickly discovered workarounds through VPN services, prompting Grok's official X account to note that Malaysian DNS blocks were "lightweight" following the ban implementation.
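
The "lightweight" label reflects how DNS-level blocking works: the ISP's resolver simply refuses to answer queries for a blocked hostname, while the underlying service stays reachable through any other resolver, such as one tunnelled over a VPN. A minimal sketch of the mechanism, with all hostnames and addresses invented for illustration:

```python
# Toy model of DNS-level blocking and why it is easy to bypass.
# Hostnames and IPs are illustrative only.

BLOCKLIST = {"grok.example.com"}  # hypothetical blocked hostname

UPSTREAM = {  # stand-in for the global DNS
    "grok.example.com": "203.0.113.10",
    "news.example.com": "203.0.113.20",
}

def isp_resolve(hostname):
    """ISP resolver under a blocking order: returns no answer
    (NXDOMAIN) for blocked names, but the route itself is untouched."""
    if hostname in BLOCKLIST:
        return None
    return UPSTREAM.get(hostname)

def foreign_resolve(hostname):
    """A resolver outside the jurisdiction, e.g. reached via VPN:
    the block does not apply, so the name still resolves."""
    return UPSTREAM.get(hostname)
```

Because only the resolver is filtered, switching resolvers restores access — which is why regulators pairing DNS blocks with telco-level enforcement, as in the Philippine order, achieve more durable blocking.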

Country | Ban Date | Enforcement Method | Specific Concerns
Indonesia | 10 January 2026 | Ministry directive | JKT48 members, women generally
Malaysia | 11 January 2026 | DNS blocking via MCMC | Hijab-removal deepfakes
Philippines | 12 January 2026 | Telco blocking order | Minors' access to porn creation

Industry Response and Global Implications

X's response to the regional pressure included being summoned to meetings by Malaysian officials and implementing restrictions on Grok's features by 14 January. The platform had already limited image generation capabilities to paying subscribers on 9 January, though critics argue this measure falls short of addressing fundamental safety concerns.


The coordinated Southeast Asian response reflects growing frustration with platform self-regulation. These nations join a broader global movement examining AI governance, with Europe's comprehensive AI regulations setting precedents for mandatory safety measures.

Grok was "misused to generate obscene, sexually explicit, indecent, grossly offensive, and non-consensual manipulated images" with insufficient responses from X and xAI, stated Malaysia's MCMC in its official ban announcement.

The bans underscore the challenges facing AI platforms in balancing innovation with ethical responsibilities. The incident has prompted renewed calls for robust deepfake verification methods and stronger content moderation frameworks across the industry.

Broader Context for AI Regulation

The Grok controversy emerges amid heightened regional focus on AI governance. The GCC's shift from guidelines to binding rules reflects growing recognition that voluntary measures prove insufficient for emerging technology risks.


Key regulatory considerations include:

  • Mandatory age verification systems for AI image generation tools
  • Proactive content scanning rather than reactive user reporting
  • Clear liability frameworks for platform operators and AI developers
  • Cross-border cooperation mechanisms for enforcement
  • Technical standards for consent verification in image processing
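
The "proactive scanning" consideration can be made concrete with a toy pre-generation gate. This is a sketch only: the term list and protected-name registry below are invented for illustration, and production systems would rely on trained classifiers and identity matching rather than keyword checks. The architectural point is that the check runs before an image is generated, not after a user files a report.

```python
import re

# Illustrative data only: real systems would use ML classifiers and
# identity matching, not hand-written lists.
EXPLICIT_TERMS = {"nude", "undress", "explicit", "deepfake"}
PROTECTED_NAMES = {"jane doe"}  # hypothetical registry of real individuals

def prompt_allowed(prompt):
    """Pre-generation gate: reject prompts that pair a real person's
    name with sexually explicit terms, instead of relying on
    post-publication user reports."""
    text = prompt.lower()
    words = set(re.findall(r"[a-z]+", text))
    names_real_person = any(name in text for name in PROTECTED_NAMES)
    requests_explicit = bool(words & EXPLICIT_TERMS)
    return not (names_real_person and requests_explicit)
```

Even this crude gate blocks the abusive request before any image exists, which is precisely the shift regulators demanded when they criticised X's reliance on after-the-fact user reporting.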

The Southeast Asian response highlights how deepfakes fuel broader security concerns beyond individual privacy violations. Financial institutions across the MENA region report increasingly sophisticated fraud attempts using AI-generated content.

What exactly is Grok and why was it banned?

  • Grok is an AI chatbot developed by Elon Musk's xAI company, integrated into X (formerly Twitter). It was banned in several Southeast Asian countries for generating non-consensual sexual deepfakes of real people.

Can users still access Grok in banned countries?

  • While official access is blocked through DNS restrictions, some users report bypassing bans using VPN services. However, this may violate local telecommunications regulations in affected countries.


What measures has X implemented in response?

  • X restricted Grok's image generation to paying subscribers only on 9 January and has engaged with regional authorities. Critics argue these measures don't address fundamental safety concerns adequately.

Are other countries considering similar bans?

  • Other regulators have issued warnings to X over Grok misuse without implementing full bans. The UK has also expressed strong concerns, with officials supporting potential regulatory action.

What does this mean for AI development in the MENA region?

  • The coordinated response signals that regulators are willing to take swift action against AI tools lacking adequate safeguards, setting a precedent likely to influence MENA governments and global AI governance standards.

THE AI IN ARABIA VIEW

The Southeast Asian response to Grok represents a watershed moment for AI governance. We believe these nations are absolutely right to prioritise citizen protection over platform convenience. The coordinated action demonstrates that regulatory authorities can move swiftly when faced with clear evidence of harm. X and xAI's initial reliance on user reporting mechanisms was woefully inadequate given the scale and severity of the abuse. This incident should serve as a wake-up call for all AI developers: robust safety measures aren't optional extras but fundamental requirements for market access.

The Grok controversy highlights the urgent need for proactive AI safety measures rather than reactive damage control. As generative AI capabilities continue advancing rapidly, the gap between technological possibility and ethical implementation widens dangerously.

Southeast Asia's decisive action may influence how other regions, including MENA, approach AI governance, particularly concerning image generation and deepfake technology. The incident underscores that market access increasingly depends on demonstrating genuine commitment to user safety and cultural sensitivity.

What specific safeguards do you think AI platforms should implement to prevent similar misuse in the future? Drop your take in the comments below.

Frequently Asked Questions

Q: How is the Middle East positioning itself in the global AI race?

  • Several MENA nations, led by Saudi Arabia and the UAE, have committed billions in sovereign AI infrastructure, talent development, and regulatory frameworks. These investments aim to diversify economies away from hydrocarbon dependence whilst establishing the region as a global AI hub.

Q: What role does government policy play in MENA's AI development?

  • Government policy is the primary driver. National AI strategies, dedicated authorities like Saudi Arabia's SDAIA, and initiatives such as the UAE's AI Minister role have created top-down frameworks that coordinate investment, regulation, and adoption across sectors.

Q: What are the biggest challenges facing AI adoption in the Arab world?

  • Key challenges include limited Arabic-language training data, talent shortages, regulatory fragmentation across jurisdictions, data privacy concerns, and the need to balance rapid AI deployment with ethical governance frameworks suited to regional cultural contexts.
