
AI-Fakes Detection Is Failing Voters in the Global South

Western AI detection tools fail to identify deepfakes in Global South content, leaving billions of voters exposed to undetected disinformation campaigns.

Updated Apr 17, 2026

Western-Biased AI Detection Tools Leave Global South Voters Exposed

As deepfake technology proliferates across political landscapes worldwide, **True Media** and similar detection platforms struggle with a critical blind spot: they simply don't work effectively outside Western contexts. While these tools can identify AI-generated images of Taylor Swift supporters backing Donald Trump with reasonable accuracy, they consistently fail when tasked with analysing content featuring non-Western faces or languages. The consequences extend far beyond technical limitations. In regions where democratic institutions remain fragile, faulty AI detection creates dangerous information vacuums that political actors can exploit with impunity.

Training Data Reflects Silicon Valley's Narrow Worldview

The fundamental problem lies in how these systems learn. Most AI detection models train exclusively on Western datasets, creating inherent biases that render them ineffective across much of the world's population. "They prioritised English language, US-accented English, or faces predominant in the Western world," explains Sam Gregory from nonprofit **Witness**.

This Western-centric approach means detection systems excel at spotting deepfakes of Caucasian politicians but struggle with content featuring MENA, African, or Latin American subjects. As the AI wave shifts to Global South markets, the detection infrastructure remains anchored in Silicon Valley's perspective. This mismatch creates a dangerous asymmetry in which sophisticated disinformation campaigns can operate virtually undetected across billions of potential voters.
"There's a huge risk in terms of inflating those kinds of numbers when you have false positives and negatives affecting policy decisions and enforcement actions," notes Sabhanaz Rashid Diya from the **Tech Global Institute**.

By The Numbers

  • Leading AI chatbots spread false information 35% of the time on controversial topics, nearly double the rate from a year prior
  • NewsGuard identified 2,089 undisclosed AI-generated news websites across 16 languages, including Chinese and Thai
  • Deepfake fraud spiked by 3,000%, contributing to $78 billion in annual global economic costs
  • 98% of professionals view misinformation as a major threat, but 55% of companies lack formal crisis response plans
  • False stories spread six times faster than truth, reaching 100,000 people while accurate information rarely exceeds 1,000

Infrastructure Gaps Compound the Problem

Beyond training bias, practical constraints hamper detection capabilities across the Global South. Many regions lack the fundamental digital infrastructure needed to develop local solutions. "Most of our data, actually, from Africa is in hard copy," reveals Richard Ngamita from **Thraets**. This digitisation gap means African AI researchers can't access the volume of local content needed to train effective detection models.

The hardware challenges prove equally daunting. Cheap smartphones dominate these markets, producing lower-quality images and videos that confuse detection algorithms trained on high-resolution Western content. Gregory notes that "a lot of the initial deepfake detection tools were trained on high quality media," making them inherently unsuited for analysing content from budget devices.

Energy constraints add another layer of complexity. "If you talk about AI and local solutions here, it's almost impossible without the compute side of things for us to even run any of our models," Ngamita explains. This infrastructure deficit forces researchers to rely on Western-built tools that fundamentally misunderstand their local contexts.
| Region | Detection Accuracy | Primary Challenges | Infrastructure Status |
| --- | --- | --- | --- |
| North America/Europe | 85-90% | Evolving AI techniques | Advanced |
| Middle East | 60-70% | Language barriers, different facial features | Moderate to advanced |
| North Africa | 45-55% | Limited training data, device quality | Developing |
| Sub-Saharan Africa | 35-45% | Data scarcity, compute limitations | Basic |
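The device-quality failure mode is easy to reproduce. Media from budget phones is typically downscaled and aggressively recompressed, overwriting the subtle artefacts a detector trained on pristine media relies on. Here is a hedged sketch using Pillow, with `detector_score` as a placeholder for any real model rather than an actual library call:

```python
from io import BytesIO
from PIL import Image

def degrade(image, width=480, jpeg_quality=35):
    """Approximate a budget-device pipeline: downscale, then recompress."""
    height = max(1, int(image.height * width / image.width))
    small = image.resize((width, height)).convert("RGB")
    buf = BytesIO()
    small.save(buf, format="JPEG", quality=jpeg_quality)
    buf.seek(0)
    return Image.open(buf)

def score_drift(image, detector_score):
    """Report how far a detector's fake-probability moves after degradation.

    `detector_score` is a stand-in for any model mapping a PIL image to a
    fake-probability in [0, 1]; it is an assumption for illustration.
    """
    before = detector_score(image)
    after = detector_score(degrade(image))
    return before, after, after - before
```

Running real and fake samples through `degrade` before scoring gives a quick read on how much of a detector's accuracy survives budget-device conditions.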

Cheapfakes Complicate Detection Efforts

While sophisticated deepfakes grab headlines, simpler manipulations often prove more problematic in practice. "Cheapfakes", basic edits created with standard software, frequently fool both automated detection systems and human analysts unfamiliar with local contexts. These low-tech manipulations thrive in the same regions that big tech AI tools already underserve. Simple techniques like selective editing, context removal, or basic face-swapping can create convincing disinformation without triggering Western-trained detection algorithms.

The prevalence of cheapfakes also creates false confidence in detection capabilities. Researchers may believe they're identifying AI-generated content when they're actually spotting basic photo manipulation, leading to inflated threat assessments and misallocated resources.
"Deepfake detection is becoming possible through layering methods. Deepfakes often have inconsistencies, mismatched noise patterns or colour shifts in images, lip-sync errors or unnatural blinking in videos," explains Sakshee Singh, Content and Partnerships Specialist at the **World Economic Forum**.

East Asia Pioneers Regulatory Solutions

While detection technology lags, some East Asian governments are implementing comprehensive regulatory frameworks. China's "Deep Synthesis" provisions, strengthened in 2025, mandate explicit labelling of AI-generated content across platforms and integrate deepfake rules into state information management systems. South Korea's AI Basic Act, effective January 2026, requires transparency through clear labelling of generative AI outputs and mandates domestic representatives for major overseas providers. The legislation also tightens criminal penalties for digital sex crimes involving AI manipulation.

These regulatory approaches offer alternatives to purely technological solutions. Rather than relying solely on detection algorithms, they create legal frameworks requiring disclosure and accountability. This regulatory foundation could provide templates for other regions struggling with inadequate detection capabilities. The success of these initiatives may influence how other nations approach the challenge, particularly as Chinese AI models now lead global token rankings and shape international AI development patterns.

Building Local Detection Capacity

Several strategies could help address the detection gap affecting Global South voters:
  • Collaborative training datasets that include diverse faces, languages, and cultural contexts from underrepresented regions (see the rebalancing sketch after this list)
  • Edge computing solutions that work effectively on low-specification devices commonly used in developing markets
  • Open-source detection tools that local researchers can modify and improve for their specific contexts
  • Cross-regional partnerships sharing detection resources and expertise between developed and developing markets
  • Media literacy programmes that help voters identify suspicious content even when automated detection fails
  • Regulatory frameworks requiring AI-generated content labelling, reducing reliance on detection technology
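On the first point, part of the fix is mechanical: even before new data is collected, oversampling the regional slices already present in a training set reduces the skew a detector inherits. A minimal sketch, assuming each training record carries a region tag (the record shape is an assumption for illustration):

```python
import random
from collections import Counter

def rebalance_by_region(records, key=lambda r: r["region"], seed=0):
    """Oversample under-represented regions until each appears equally often.

    `records` is any list of training examples carrying a region tag; the
    tag name and record shape are assumptions, not a fixed schema.
    """
    rng = random.Random(seed)
    counts = Counter(key(r) for r in records)
    target = max(counts.values())
    balanced = list(records)
    for region, n in counts.items():
        pool = [r for r in records if key(r) == region]
        balanced.extend(rng.choices(pool, k=target - n))
    rng.shuffle(balanced)
    return balanced
```

Oversampling cannot conjure diversity that was never collected, which is why the collaborative collection effort above comes first.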
The challenge requires coordinated international effort. As China pushes to shape global AI rules through new cooperation initiatives, there's potential for developing more inclusive detection standards that serve voters worldwide rather than just Western markets.

Why do AI detection tools work better in Western countries?

Detection tools are primarily trained on Western datasets featuring Caucasian faces and English-language content. This training bias makes them highly effective at identifying manipulated Western media but significantly less reliable when analysing content from other regions with different facial features, languages, and cultural contexts.

What are cheapfakes and why are they problematic?

Cheapfakes are simple media manipulations created with basic editing software rather than sophisticated AI. They're problematic because they can fool detection systems trained to identify complex deepfakes, and they're easily created and distributed in regions with limited technological infrastructure.

How are East Asian countries addressing AI-generated content?

China and South Korea have implemented comprehensive regulations requiring clear labelling of AI-generated content and establishing legal accountability frameworks. These approaches complement technological detection by creating regulatory requirements for disclosure and transparency.

Can local detection tools be developed for Global South markets?

Yes, but significant challenges exist including limited access to training data, inadequate computing infrastructure, and energy constraints. Success requires international collaboration, open-source development approaches, and targeted investment in local technical capacity.

What happens when detection tools produce false results?

False positives and negatives can lead to incorrect policy decisions, inappropriate enforcement actions, and misallocation of resources. They can also create false confidence in detection capabilities or unnecessary panic about misinformation threats, ultimately undermining democratic processes.

The AIinArabia View: The current state of AI detection represents a form of digital colonialism, where Western-trained systems impose their limitations on global information environments. East Asia's regulatory leadership offers a promising alternative approach, but true progress requires acknowledging that technological solutions alone cannot address this challenge. We need inclusive development practices, meaningful international cooperation, and recognition that effective detection must serve all voters, not just those in Silicon Valley's backyard. The stakes are too high for half-measures.
As political campaigns increasingly weaponise AI-generated content, the detection gap affecting Global South voters represents a critical threat to democratic processes worldwide. The combination of biased training data, infrastructure limitations, and inadequate international cooperation leaves billions of voters vulnerable to sophisticated disinformation campaigns that operate below the radar of existing detection systems. What specific steps should the international community take to ensure AI detection tools serve voters globally rather than just in wealthy Western markets? Drop your take in the comments below.
