AI in Arabia
Business

Workers Are Using AI More But Trusting It Less

Employee AI usage surges 13% while trust plummets 18%, revealing a troubling gap between adoption mandates and workplace reality

Updated Apr 17, 2026 · 4 min read

The Workplace AI Paradox: More Usage, Less Confidence

A striking contradiction is emerging in the modern workplace: whilst AI adoption continues its relentless march forward, employee trust in these technologies is eroding. **ManpowerGroup**'s latest research reveals an 18% drop in worker confidence alongside a 13% increase in usage, suggesting that initial enthusiasm has given way to practical frustrations. This paradox reflects a deeper challenge facing organisations worldwide: simply deploying AI tools isn't sufficient to drive meaningful productivity gains. The gap between marketing promises and operational reality has left many employees feeling disillusioned with technologies they're increasingly required to use.

The Numbers Tell a Troubling Story

Recent workplace surveys paint a picture of widespread adoption coupled with growing scepticism. In the United States, 43% of workers now use AI professionally, yet only 13% receive company training on these tools. Meanwhile, 29% of users operate AI systems without informing their managers, indicating significant trust and oversight gaps. The disconnect becomes even more apparent when examining user behaviour. While 84% of AI-using professionals report efficiency benefits, many struggle with fundamental implementation challenges: tools frequently hallucinate information, require extensive prompt refinement, or demand complete workflow overhauls that negate promised time savings. This trend mirrors broader concerns about corporate transparency in AI deployment, and organisations must address the underlying causes of declining confidence.

By The Numbers

  • 43% of American workers used AI professionally in Q3 2025, up from 37% the previous quarter
  • Only 13% received company AI training despite widespread adoption
  • 30% of global office workers across 16 countries are classified as AI Power Users in 2026
  • 21% of non-technical workers remain reluctant to embrace AI technologies
  • 84% of AI users report efficiency benefits, yet trust continues declining

When Reality Falls Short of Promise

The enthusiasm gap stems largely from unrealistic expectations set by marketing materials and corporate communications. AI demonstrations often present oversimplified scenarios that bear little resemblance to complex workplace realities. Tabby Farrar, head of search at UK agency **Candour**, exemplifies this challenge. Her team actively pursues AI integration for efficiency gains but frequently encounters tools that hallucinate information or demand extensive prompt engineering. These experiences transform promised time savings into additional workload burdens.
"You can't have an intimidated workforce and be fully productive. Most employees are comfortable with their established routines, and AI often demands complete workflow overhauls," said Mara Stefan, VP of Global Insights at ManpowerGroup.


This psychological resistance compounds technical challenges. Workers must invest significant mental effort to adapt familiar processes, often questioning whether the disruption justifies the benefits. The result is a workforce simultaneously using and resenting the technologies they're expected to embrace. Understanding broader workplace anxieties about AI replacement helps explain this resistance.

The Training Deficit Crisis

Perhaps the most damaging factor in declining AI confidence is inadequate organisational support. **ManpowerGroup**'s research found that 56% of respondents received no recent AI training, whilst 57% lacked access to relevant mentorship programmes. This educational vacuum leaves employees struggling with powerful technologies they neither understand nor trust. Without proper guidance, workers resort to trial-and-error approaches that reinforce negative perceptions of AI reliability and usefulness.
"We often assume that more technology means less connection. But our data tells a different story. The employees most embedded in AI workflows are also the ones most engaged in learning and have better team relationships," said Janet Pogue McLaurin, Global Director of Workplace Research at Gensler.


Successful organisations invest heavily in comprehensive training programmes that transform anxiety into competence. They create environments where experimentation is encouraged and failures become learning opportunities rather than sources of frustration. Companies exploring strategic AI implementation must prioritise human-centred approaches that build confidence alongside capability.
| Training Approach | Employee Confidence | Adoption Success Rate | Productivity Impact |
|---|---|---|---|
| Comprehensive programmes | High | 78% | Significant gains |
| Basic orientation | Moderate | 45% | Marginal improvements |
| Self-directed learning | Low | 23% | Mixed results |
| No formal training | Very low | 12% | Often negative |

Building Bridges to Better AI Adoption

Forward-thinking organisations are developing strategies that address both technical and psychological barriers to AI adoption. These approaches recognise that successful implementation requires more than software deployment. Key elements include:


  1. Appointing internal AI champions who understand both technology capabilities and team dynamics
  2. Building buffer time into projects for AI experimentation and learning
  3. Framing new tools as "test and learn" initiatives rather than productivity mandates
  4. Providing regular check-ins and open forums for discussing challenges and successes
  5. Creating tailored AI solutions that address specific organisational needs rather than generic applications
At **Candour**, Farrar's team exemplifies this thoughtful approach. They've developed a "Gemini Gem" trained on brand guidelines to generate client-ready quotes, demonstrating AI's potential when properly customised. This success story contrasts sharply with their earlier frustrations with generic tools. The difference between successful and failed implementations often comes down to whether organisations treat generational differences in AI adoption as opportunities rather than obstacles.

Why are employees losing trust in AI despite increased usage?

The gap between marketing promises and operational reality creates frustration. Many tools require extensive refinement or produce unreliable outputs, leading to negative experiences that undermine confidence even as usage mandates increase.

What role does training play in AI adoption success?

Comprehensive training programmes dramatically improve adoption rates and user confidence. Without proper education, employees struggle with powerful technologies they neither understand nor trust, creating negative feedback loops.


How can organisations bridge the AI confidence gap?

Successful strategies include appointing internal champions, building experimentation time into workflows, providing ongoing support, and creating tailored solutions that address specific organisational needs rather than deploying generic tools.

What makes some AI implementations more successful than others?

Success correlates with human-centred approaches that prioritise user education, psychological comfort, and practical utility. Organisations that frame AI as workflow enhancement rather than replacement see better adoption rates.

Should companies be concerned about hidden AI usage among employees?

Yes, when 29% of users operate AI without management awareness, it indicates serious governance gaps. This shadow usage often reflects inadequate training and unclear policies rather than malicious intent.

Further reading: Reuters | OECD AI Observatory

THE AI IN ARABIA VIEW

This development reflects the broader momentum building across the Arab world's AI ecosystem. The pace of change is accelerating, and the gap between regional ambition and global competitiveness is narrowing. What matters now is sustained execution, not just announcements, and the willingness to measure progress against outcomes rather than investment figures alone.

This confidence crisis represents a critical inflection point for workplace AI adoption. Organisations that continue pushing deployment without addressing fundamental trust issues will find themselves with expensive tools and frustrated workforces. The solution isn't slowing AI adoption but accelerating investment in human-centred implementation strategies. Companies that prioritise employee education, provide adequate support systems, and create psychologically safe environments for AI experimentation will emerge as productivity leaders. Those that ignore the confidence gap risk creating permanent resistance to technologies that could otherwise transform their operations.
The path forward requires acknowledging that AI adoption is fundamentally a human challenge wrapped in a technological solution. Success depends not just on choosing the right tools but on creating the right conditions for people to embrace and excel with them. Regional trust patterns across the Middle East and North Africa suggest this challenge extends far beyond individual organisations, requiring industry-wide commitment to better implementation practices. What specific challenges has your organisation encountered when implementing AI tools, and how have you addressed employee concerns about reliability and usefulness? Drop your take in the comments below.

## Frequently Asked Questions

### Q: How are businesses in the Arab world adopting generative AI?

Adoption is accelerating across sectors, with enterprises deploying generative AI for content creation, customer service automation, code generation, and internal knowledge management. The Gulf's digital-first business culture is proving to be a strong tailwind for adoption.

### Q: What are the biggest challenges facing AI adoption in the Arab world?

Key challenges include limited Arabic-language training data, talent shortages, regulatory fragmentation across jurisdictions, data privacy concerns, and the need to balance rapid AI deployment with ethical governance frameworks suited to regional cultural contexts.

### Q: How does AI In Arabia cover developments in the region?

AI In Arabia provides in-depth reporting, analysis, and opinion on artificial intelligence developments across the Middle East and North Africa, spanning policy, business, startups, research, and societal impact.
