AI in Arabia
News

"I’m deeply uncomfortable with these decisions" - Anthropic's CEO

Anthropic's CEO admits deep discomfort with AI power concentration as the company warns of the first large-scale AI cyberattack executed autonomously.

Updated Apr 17, 2026 · 4 min read

Anthropic's CEO Calls for Urgent AI Regulation Amid Safety Concerns

**Anthropic's** chief executive Dario Amodei has delivered one of the strongest calls yet for AI regulation from within the industry. In a candid November 2025 interview on CBS News' 60 Minutes, he expressed deep discomfort with the concentration of power over AI development in the hands of a few technology leaders.
"I think I'm deeply uncomfortable with these decisions being made by a few companies, by a few people," Amodei stated. "And this is one reason why I've always advocated for responsible and thoughtful regulation of the technology."
Pressed by correspondent Anderson Cooper about democratic legitimacy, Amodei gave a stark answer. "Who elected you and Sam Altman?" Cooper asked. "No one. Honestly, no one," Amodei replied. This admission highlights a fundamental tension in AI governance: who should control technologies that could reshape society? Unlike traditional industries, where market forces and regulation evolved gradually, AI development is racing ahead faster than oversight mechanisms can adapt.

The First AI Cyberattack Changes Everything

Anthropic's concerns gained urgency following what the company described as "the first documented case of a large-scale AI cyberattack executed without substantial human intervention." This incident, disclosed ahead of the CBS interview, represents a watershed moment for AI security. The attack validates earlier warnings from cybersecurity experts, including former **Mandiant** CEO Kevin Mandia, who predicted such AI-agent attacks would materialise sooner than expected. For businesses across the Middle East and North Africa, this development signals a new era of cyber threats that traditional security measures may struggle to address.

By The Numbers

  • Anthropic reached a $380 billion valuation in its latest funding round
  • The company donated $16 million to Public First Action, a super PAC focused on AI safety
  • All 50 US states introduced AI-related legislation this year, with 38 adopting safety measures
  • Anthropic's Claude chatbot achieved a 94% political even-handedness rating
  • **OpenAI** maintains an estimated $500 billion valuation, ahead of Anthropic
The competitive landscape is intensifying. Amodei previously warned that sitting "on the sidelines" would mean Anthropic would "lose and stop existing as a company." This pressure creates an inherent conflict between safety priorities and commercial survival.

Global Regulatory Fragmentation Creates Compliance Challenges

The regulatory landscape varies dramatically across regions. The **EU AI Act** sets a global benchmark for comprehensive AI regulation, whilst the United States lacks federal AI-specific legislation. This fragmentation poses significant challenges for companies operating internationally.
| Region | Regulatory Approach | Implementation Status |
| --- | --- | --- |
| European Union | Comprehensive AI Act | Fully implemented |
| United States | State-by-state patchwork | Fragmented adoption |
| UAE | Model AI Governance Framework | Under development |
| China | Algorithm regulation focus | Strict enforcement |
In the MENA region, regulatory approaches differ markedly. The UAE is pioneering agentic AI governance frameworks, whilst Morocco has enacted the MENA region's first comprehensive AI law. These varying requirements create operational complexity for multinational AI companies. The challenges extend beyond compliance: cultural nuances around AI ethics vary significantly across MENA markets, requiring tailored approaches to content moderation and user interaction. Companies must navigate not just regulatory differences but also social expectations around AI behaviour.

For related analysis, see: [The Thirst of AI: A Looming Water Crisis in the Middle East](/news/the-thirst-of-ai-a-looming-water-crisis-in-asia).

Safety Theatre or Genuine Commitment?

Anthropic's founding story centres on AI safety. Amodei departed **OpenAI** in late 2020 over disagreements about safety priorities, taking several researchers with him to establish Anthropic the following year as a competitor focused explicitly on safe AI development.
"There was a group of us within OpenAI that had a very strong belief in two things," Amodei explained to Fortune. "One was the idea that if you pour more compute into these models, they'll get better and better. And the second was that you needed something in addition to just scaling the models up, which is alignment or safety."
Anthropic has implemented several safety measures:
  • Constitutional AI approach that imbues models with values rather than strict rules
  • Responsible Scaling Policy pledging not to release AI capable of catastrophic harm
  • Regular safety reports documenting model vulnerabilities and limitations
  • Transparency initiatives including political neutrality testing
  • Financial support for AI safety research organisations
However, critics question whether these measures constitute genuine safety commitment or strategic positioning. **Meta's** former chief AI scientist Yann LeCun accused Anthropic of "regulatory capture," suggesting the company uses safety warnings to influence legislation against open-source competitors. Recent internal tensions underscore these debates. AI safety researcher Mrinank Sharma resigned from Anthropic, citing concerns that "the world is in peril" and expressing frustration with balancing values against commercial pressures.

For related analysis, see: [Apple's AI Plan: Gemini Today, Siri Tomorrow?](/news/apple-s-ai-plan-gemini-today-siri-tomorrow).

The Economic Pressure Cooker

The fundamental tension between safety and profitability creates ongoing challenges for AI companies. Amodei acknowledges this pressure candidly, noting that Anthropic faces "incredible commercial pressure" whilst trying to maintain safety standards that exceed industry norms. Brian Jackson from **Info-Tech Research Group** explains the financial reality: unlike traditional tech services, large language models carry substantial per-query costs. The infrastructure requirements for AI companies, including data centres, GPUs, and cloud computing, create ongoing capital expenditure that demands revenue growth.
"As AI scales and as more usage grows, they're not necessarily going to get to that profitability as easily or as quickly, because the cost per prompt is so high," Jackson observed.
This economic reality affects the entire industry. Major tech companies continue pouring billions into Anthropic, whilst global competitors like Tencent launch new reasoning models to capture market share. The competitive intensity means companies must balance innovation speed with safety considerations. Sitting still risks market irrelevance, but moving too fast risks catastrophic failures that could damage the entire industry's reputation.

Three Horizons of AI Risk

For related analysis, see: [World Government Summit Declares Middle East the New AI Epicentre](/business/boao-forum-asia-ai-epicentre-106-billion-2026).

Amodei categorises AI risks across three distinct timelines, each requiring different regulatory approaches:

**Short-term risks** include bias and misinformation, already impacting public discourse and democratic processes. These immediate concerns require swift regulatory intervention and industry self-regulation.

**Medium-term threats** involve AI systems generating harmful information using enhanced scientific knowledge. This includes the potential creation of biological weapons or sophisticated cyber attacks, as demonstrated by Anthropic's recent security incident.

**Long-term existential risks** centre on AI potentially removing human agency from critical systems. These concerns align with warnings from AI pioneer Geoffrey Hinton about systems that could outsmart and control humans within the next decade.
The AI in Arabia View: Amodei's uncomfortable honesty about AI governance highlights a critical moment for the industry. Whilst his calls for regulation are commendable, the fundamental tension between safety and commercial pressure remains unresolved. The Middle East and North Africa's fragmented regulatory landscape creates both opportunities and challenges. Countries like the UAE and Morocco are pioneering governance frameworks that could influence global standards. However, the region's economic importance means any regulatory missteps could either accelerate or derail responsible AI development worldwide. We believe the MENA region must take a leading role in harmonising AI governance whilst preserving innovation incentives.

Who should regulate AI development globally?

A combination of international bodies, national governments, and industry self-regulation is needed. No single entity can effectively govern a technology with such broad implications across borders and sectors.

Can AI companies truly prioritise safety over profits?

The current venture capital model creates inherent tensions. Companies need sustainable business models that don't compromise safety, potentially requiring new funding structures or regulatory frameworks that support responsible development.

For related analysis, see: [GO DEEPER: Is AI Another Dotcom Bubble Waiting To Burst?](/business/do-deeper-is-the-ai-boom-in-asia-just-another-dot-com-bubble).

How do cultural differences affect AI regulation in the MENA region?

MENA markets have varying expectations around privacy, government oversight, and social responsibility. China's structured approach contrasts sharply with the UAE's market-friendly frameworks, requiring companies to adapt strategies by jurisdiction.

What makes Anthropic's safety approach different from competitors?

Anthropic employs Constitutional AI, which trains models using principles rather than rules. The company also publishes detailed safety reports and maintains policies against releasing potentially catastrophic systems.

Why are AI operational costs so high compared to traditional tech services?

Large language models require massive computational resources for both training and inference. Unlike web searches that cost fractions of a penny, each AI interaction requires significant processing power, creating ongoing expenses that challenge traditional tech economics.

Further reading: Anthropic | OECD AI Observatory

THE AI IN ARABIA VIEW

AI governance in the Arab world is evolving rapidly, often outpacing Western regulatory frameworks in speed of implementation if not always in depth. The region has an opportunity to become a model for agile, principles-based AI regulation that balances innovation incentives with societal safeguards.

The question remains whether the current model of AI development can sustainably balance innovation with safety. As Amodei continues warning about AI firms posing risks to humanity, the industry faces mounting pressure to resolve these fundamental tensions before they spiral beyond control. Can an industry built on exponential growth truly prioritise long-term safety over short-term gains? What role should MENA governments play in shaping global AI governance standards? Drop your take in the comments below.

## Frequently Asked Questions

### Q: What is the regulatory landscape for AI in the Arab world?

The MENA region is developing a patchwork of AI governance frameworks. The UAE, Saudi Arabia, and Bahrain have been early movers with dedicated AI strategies and regulatory sandboxes, whilst other nations are still formulating their approaches.

### Q: What are the biggest challenges facing AI adoption in the Arab world?

Key challenges include limited Arabic-language training data, talent shortages, regulatory fragmentation across jurisdictions, data privacy concerns, and the need to balance rapid AI deployment with ethical governance frameworks suited to regional cultural contexts.

### Q: How does AI In Arabia cover developments in the region?
AI In Arabia provides in-depth reporting, analysis, and opinion on artificial intelligence developments across the Middle East and North Africa, spanning policy, business, startups, research, and societal impact.
