Mass Exodus From AI Safety Teams Signals Growing AGI Concerns
**OpenAI** has lost approximately half of its artificial general intelligence safety researchers in a dramatic wave of departures that is reshaping the AI safety landscape. The exodus, which includes co-founder John Schulman and chief scientist Ilya Sutskever, reflects deepening concerns about the company's approach to managing AGI risks. Daniel Kokotajlo, a former OpenAI safety researcher, told Fortune that these departures stem from the belief that OpenAI is "fairly close" to developing AGI but remains unprepared for the associated risks. The company's disbanded superalignment team and shifting priorities have left many safety experts questioning whether commercial pressures are overriding caution.
High-Profile Departures Signal Deeper Issues
The resignation wave began with Sutskever and Jan Leike, who jointly led OpenAI's superalignment team focused on future AI system safety. OpenAI dissolved the team immediately after their departure, with Leike later joining **Anthropic** and publicly criticising the company's direction.
"I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," said Jan Leike, former OpenAI superalignment team leader, who now heads safety research at Anthropic.
The departures weren't limited to OpenAI. In early 2026, **xAI** lost 12 staffers, while **Anthropic** saw its own high-profile resignation when Mrinank Sharma, former head of safeguards research, stepped down citing ethical concerns.
"Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions," said Mrinank Sharma in his February 2026 resignation from Anthropic.
By The Numbers
- Approximately 50% of OpenAI's AGI safety researchers have left the company
- 12 staffers departed from xAI in early 2026 over safety concerns
- OpenAI slowed hiring growth in January 2026, with CEO Sam Altman citing AI efficiency gains
- The superalignment team, focused on future AI safety, was completely disbanded after leadership departures
- Multiple resignations occurred across Anthropic, OpenAI, and xAI in the week prior to February 12, 2026
Where Safety Experts Are Heading Next
Despite leaving their previous positions, these researchers haven't abandoned AI development entirely. They're redirecting their efforts toward organisations they believe prioritise safety more effectively.
For related analysis, see: [AI in the MENA region: Bridging the Risk Management Gap](/business/ai-in-asia-bridging-the-risk-management-gap).
Key departures include:
- Jan Leike moved to Anthropic to lead safety research efforts
- John Schulman, OpenAI co-founder, joined Anthropic for safety-focused work
- Ilya Sutskever founded **Safe Superintelligence Inc** to develop safe AGI systems
- Multiple researchers transitioned to startups focused on responsible AI development
Regulatory Tensions and Corporate Opposition
The safety concerns extend beyond internal company dynamics to broader regulatory debates. Former OpenAI researchers have criticised the company's opposition to California's SB 1047 bill, which aims to regulate advanced AI system risks.
| Company | Regulatory Stance | Key Safety Initiatives |
|---|---|---|
| OpenAI | Opposed California SB 1047 | Disbanded superalignment team |
| Anthropic | Supported safety regulations | Recruiting former OpenAI safety staff |
| xAI | Limited public stance | Facing internal safety staff departures |
For related analysis, see: [Google Ranks Best AI Models for Android Dev](/news/google-ranks-best-ai-models-for-android-dev).
Kokotajlo and other former researchers co-signed a letter to Governor Newsom describing OpenAI's regulatory opposition as a betrayal of the company's original commitment to thoroughly assess AGI's long-term risks. This tension reflects broader debates about how governments should approach AI regulation.
AGI Timeline Predictions Fuel Urgency
The departing researchers' concerns are amplified by their belief that AGI development is accelerating rapidly. Before leaving OpenAI, Schulman predicted AGI would be achievable within two to three years. This timeline has created urgency around safety preparations that many experts believe are inadequate.
The safety researchers also point to a "chilling effect" on publishing AGI risk research within OpenAI, suggesting that commercial and communications teams now have more influence over what research gets published. This shift toward corporate messaging over scientific transparency has contributed to the growing exodus of safety-focused staff.
For related analysis, see: [AI in Hiring: Safeguards Needed, Say HR Professionals](/business/ai-in-hiring-safeguards-needed-say-hr-professionals).
The Middle East and North Africa's role in this global AI safety discussion remains significant, with major partnerships such as OpenAI's Stargate deal with South Korean tech giants highlighting the region's central position in AGI development infrastructure.
Why are so many AI safety researchers leaving major companies?
Researchers cite concerns about companies prioritising commercial goals over safety measures, inadequate preparation for AGI risks, and restrictions on publishing safety research that could help the broader AI community understand and mitigate potential dangers.
What is the superalignment team and why does its dissolution matter?
OpenAI's superalignment team focused on ensuring future AI systems remain aligned with human values and controllable. Its dissolution after key leadership departures signals reduced institutional commitment to long-term AI safety research.
Are these researchers abandoning AI development entirely?
No, most are joining competitors like Anthropic or starting new ventures focused on safe AI development. They remain committed to advancing AI technology but seek environments that prioritise safety alongside commercial goals.
For related analysis, see: [AI-Optimised Solar: How the Gulf Is Using Machine Learning t](/energy/ai-optimised-solar-gulf-machine-learning-desert-sun).
How close is artificial general intelligence according to these experts?
Former OpenAI researchers suggest AGI could arrive within two to three years, making current safety preparations critically important. This timeline has intensified debates about whether companies are adequately prepared for AGI risks.
What role does regulation play in these departures?
Many departing researchers support stronger AI regulations like California's SB 1047 bill, while their former companies often oppose such measures. This regulatory divide reflects deeper disagreements about balancing innovation speed with safety precautions.
THE AI IN ARABIA VIEW
The MENA AI startup scene is maturing beyond the hype cycle. What we are seeing now is a shift from AI-as-a-feature to AI-native business models built for regional needs. The founders who will win are those solving distinctly Arab-world problems, not simply localising Silicon Valley playbooks.
Key challenges include limited Arabic-language training data, talent shortages, regulatory fragmentation across jurisdictions, data privacy concerns, and the need to balance rapid AI deployment with ethical governance frameworks suited to regional cultural contexts.
### Q: How does AI In Arabia cover developments in the region?
AI In Arabia provides in-depth reporting, analysis, and opinion on artificial intelligence developments across the Middle East and North Africa, spanning policy, business, startups, research, and societal impact. Analysts project the MENA AI market will exceed $20 billion by 2030, driven by massive government investment, growing private sector adoption, and an expanding talent pool fuelled by the region's young, digitally-native demographic.