The Great AI Brain Drain: Why DeepMind, Meta, and OpenAI Stars Are Fleeing to Startups—and What It Means for the Future of Artificial Intelligence
Something extraordinary is happening inside the world's most advanced AI laboratories. The people who built the foundational systems defining our current era of artificial intelligence are walking out the door—and they are not going to competitors. They are starting their own companies, raising unprecedented sums of money within months of incorporation, and pursuing research directions that their former employers have deemed too risky, too long-term, or too commercially uncertain to prioritize.
David Silver, the DeepMind researcher who taught AI to master chess and Go through pure reinforcement learning, just raised $1.1 billion at a $5.1 billion valuation for a company founded mere months ago. Yann LeCun, Meta's chief AI scientist and winner of the Turing Award, left his post to launch AMI Labs, which raised $1.03 billion in March. Tim Rocktäschel, another DeepMind principal scientist, is reportedly raising up to $1 billion for Recursive Superintelligence. And they are not alone. In 2026 alone, venture capitalists have funneled $18.8 billion into AI startups founded since the beginning of 2025—on track to surpass the $27.9 billion invested in startups launched since 2024.
This article examines why the brightest minds in AI are leaving the safety of Big Tech, what they are building, and how this migration will reshape the competitive landscape, research priorities, and commercial trajectory of artificial intelligence.
The Exodus: Who Is Leaving and Where They Are Going
The brain drain is not concentrated in one company or one geography. It is a systemic shift affecting every major AI laboratory.
From DeepMind to Independence
Google DeepMind, the London-based lab that produced AlphaGo, AlphaFold, and Gemini, is experiencing the most visible exodus. David Silver's departure to found Ineffable Intelligence is the most dramatic example. Silver spent over a decade at DeepMind leading the reinforcement learning team, where he developed AlphaZero—a program that learned chess, shogi, and Go at superhuman levels through nothing but self-play, without studying a single human game record.
Silver's new company aims to extend that philosophy to general intelligence. Ineffable Intelligence is building what it calls a "superlearner"—an AI system that discovers knowledge and skills entirely through its own experience, without relying on the massive datasets of human-generated text, images, and code that fuel today's large language models. The company's stated ambition is breathtaking: "If successful, this will represent a scientific breakthrough of comparable magnitude to Darwin: where his law explained all Life, our law will explain and build all Intelligence."
Tim Rocktäschel, DeepMind's former principal scientist, founded Recursive Superintelligence in the United Kingdom and is reportedly in talks to raise between $500 million and $1 billion. His focus appears to be on AI systems that can recursively improve themselves—a direction that offers extraordinary promise and raises profound safety questions.
From Meta to AMI Labs
Yann LeCun's departure from Meta sent shockwaves through the industry precisely because LeCun was not a rank-and-file researcher. As Meta's chief AI scientist and winner of the 2018 Turing Award for his work on deep learning, LeCun was one of the most influential figures in the field. His AMI Labs, co-founded after leaving Meta, raised $1.03 billion at a $3.5 billion pre-money valuation in March 2026.
AMI Labs is pursuing what LeCun has long advocated: world models—AI systems that learn from continuous real-world data rather than static internet datasets. The company's thesis is that current AI excels at content generation but fails at grounding, causality, and reliable behavior in physical environments. As AI moves beyond screens into robotics, healthcare, and industrial applications, these limitations become critical.
From OpenAI and Anthropic to New Ventures
The exodus extends to American labs as well. Periodic Labs, founded by former OpenAI and DeepMind staff, raised $300 million in September 2025 for autonomous laboratory systems. Ricursive Intelligence, founded by former Anthropic and Google researchers Anna Goldie and Azalia Mirhoseini, raised $335 million across two rounds for AI-powered chip design. Humans&, launched by former Anthropic and xAI employees in October 2025, raised $480 million in January 2026 for reinforcement learning systems.
Why They Are Leaving: Four Forces Driving the Exodus
Understanding this migration requires looking beyond individual career decisions to the structural forces pushing researchers out of Big Tech.
1. The Tyranny of Benchmarks
Inside frontier labs like OpenAI, Google DeepMind, and Anthropic, research priorities are increasingly dictated by the need to win on standardized benchmarks. GPT-5.5 must outperform Claude Mythos on MMLU. Gemini must exceed GPT on HumanEval. These benchmarks have become the currency of competitive positioning, investor reporting, and marketing claims.
But benchmarks create narrow incentives. They reward incremental improvements on well-defined tasks rather than exploratory research into fundamentally different approaches. As Elise Stern, managing director at French venture capital firm Eurazeo, told CNBC: "When you're in a race, you narrow focus. That creates a vacuum. Entire areas of research, like new architectures, agents, interpretability and vertical models, are being deprioritised, not because they don't matter, but because they don't win the immediate race."
For researchers like David Silver, whose work on AlphaZero demonstrated that radically different learning paradigms could outperform human knowledge, this narrowing is intellectually suffocating. Startups offer the freedom to pursue approaches that might fail on current benchmarks but could redefine what AI is capable of.
2. The Scaling Skeptics
A growing faction within AI research is questioning whether the current large language model paradigm—bigger models, more data, more compute—can actually reach the next level of capability. HV Capital partner Alexander Joël-Carbonell articulated this skepticism to CNBC: "Inside the large foundational labs, the pressure to deliver benchmark performance and maintain rapid release cycles leaves limited room for genuinely exploratory research, particularly outside the dominant LLM paradigm."
The skeptics point to diminishing returns. Each doubling of model size demands a disproportionate increase in training compute for only marginal gains in performance. Hallucinations persist despite scaling. Reasoning remains brittle. Common sense fails in ways that suggest fundamental architectural limitations rather than insufficient scale.
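The diminishing-returns argument can be made concrete with a toy power-law loss curve of the kind reported in scaling-law studies. The constants below are invented for illustration and are not fitted to any real model family; the point is only the shape of the curve, in which each doubling of compute buys a smaller absolute improvement than the last.

```python
# Illustrative only: a power-law loss curve of the form
#   L(C) = a * C**(-alpha) + floor
# loosely inspired by published scaling-law studies. All constants here
# are made up for demonstration, not fitted to any real model family.

def loss(compute: float, a: float = 10.0, alpha: float = 0.05,
         floor: float = 1.7) -> float:
    """Predicted loss as a function of training compute (arbitrary units)."""
    return a * compute ** -alpha + floor

# Each doubling of compute yields a smaller absolute improvement than the last.
for c in (1e21, 2e21, 4e21, 8e21):
    print(f"compute={c:.0e}  loss={loss(c):.4f}")
```

Because the exponent sits inside a power law rather than a linear relationship, the gain from each doubling shrinks by a constant factor, which is the pattern the scaling skeptics point to.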
Startups like Ineffable Intelligence and AMI Labs are betting that the path to more capable AI runs through different architectures, learning paradigms, and data sources—not just bigger versions of what already exists.
3. Commercial Pressure vs. Scientific Freedom
As AI companies have raised billions in funding and achieved multi-hundred-billion-dollar valuations, the pressure to commercialize has intensified. Research that does not have a clear path to revenue within 12-24 months faces funding cuts. Long-term safety research, interpretability studies, and exploratory architecture work struggle for resources.
Anna Goldie, co-founder of Ricursive Intelligence and former Anthropic and DeepMind researcher, explained the startup advantage to CNBC: "For chipmakers to trust us with their most valuable IP, we have to be Switzerland, and that wouldn't be possible if we were at Google." Independence from Big Tech's competitive dynamics allows startups to serve customers neutrally and pursue research without worrying about how it affects a parent company's product roadmap.
4. The Economics of Startup Independence
Perhaps the most pragmatic driver is financial. The venture capital environment for AI startups has reached unprecedented levels of generosity. Seed rounds that once meant $2-5 million now routinely exceed $100 million. The term "coconut round"—a tongue-in-cheek escalation of "seed round" coined by Bloomberg—has entered the lexicon to describe these mega-seed financings.
For researchers with established reputations, the risk-adjusted return of starting a company has shifted dramatically. Inside Big Tech, even star researchers are employees. As founders, they can build billion-dollar companies, control their research direction, and retain meaningful equity. The $1.1 billion raised by Ineffable Intelligence and the $1.03 billion raised by AMI Labs are not just funding—they are validation that the market will pay extraordinary premiums for research independence.
What They Are Building: The New Research Frontier
The startups emerging from this brain drain are not building slightly better chatbots. They are pursuing directions that could redefine AI's capabilities and limitations.
Reinforcement Learning Without Human Data
David Silver's Ineffable Intelligence represents the purest expression of this new direction. The company's "superlearner" concept aims to replicate the AlphaZero methodology across general intelligence: AI systems that learn entirely through interaction with their environment, discovering strategies and knowledge that no human has ever articulated.
The implications are profound. Current AI systems are fundamentally limited by the quality and scope of human-generated training data. They can only know what humans have written, photographed, or coded. A system that learns from pure experience could discover physical laws, mathematical truths, and strategic insights that exist beyond human knowledge.
The challenges are equally significant. Reinforcement learning works well in constrained environments like games, where success is clearly defined and feedback is immediate. General intelligence operates in open-ended domains where goals are ambiguous, feedback is delayed, and the action space is effectively infinite. Whether Silver's approach can scale beyond structured environments is the central scientific question his company must answer.
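The learn-from-experience paradigm can be illustrated in miniature with tabular Q-learning: an agent that starts knowing nothing about a tiny corridor world and discovers the optimal policy purely from reward signals, with no human demonstrations or labeled data. This is a toy sketch of the paradigm in a constrained environment, not a description of Ineffable Intelligence's actual system.

```python
# Minimal sketch of learning purely from experience: tabular Q-learning on a
# six-state corridor. The agent receives no human data; it discovers that
# moving right is optimal solely from trial, error, and reward.
import random

N_STATES = 6                 # positions 0..5; reaching 5 ends the episode
ACTIONS = (-1, +1)           # move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = random.Random(0)

def step(state, action):
    """Environment dynamics: deterministic movement, reward 1 at the end."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def greedy(state):
    """Best-known action, breaking ties randomly to keep exploring early on."""
    best = max(q[(state, a)] for a in ACTIONS)
    return rng.choice([a for a in ACTIONS if q[(state, a)] == best])

for _ in range(500):                       # episodes of pure trial and error
    s = 0
    for _ in range(100):                   # cap episode length
        a = rng.choice(ACTIONS) if rng.random() < epsilon else greedy(s)
        nxt, r, done = step(s, a)
        target = r if done else r + gamma * max(q[(nxt, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (target - q[(s, a)])
        s = nxt
        if done:
            break

policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)   # learned greedy action per non-terminal state
```

In this constrained setting the approach converges quickly because success is clearly defined and feedback is immediate; the open question for Silver's company is whether anything like it can work when goals are ambiguous and the action space is effectively infinite.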
World Models and Grounded AI
AMI Labs' focus on world models addresses a different limitation of current AI: its lack of grounding in physical reality. Large language models process text about the world but have never experienced it. They can describe gravity but have never dropped an object. They can discuss causality but have never manipulated cause and effect.
World models aim to give AI systems internal simulations of how the world works—physical, social, and causal. LeCun has argued for years that such models are essential for AI to achieve human-level understanding and reliable behavior in real-world settings. AMI Labs is attempting to build these models at scale, using continuous streams of real-world sensor data rather than static internet corpora.
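The structure of the world-model approach can be sketched in a deliberately tiny form: an agent gathers raw interaction data, fits an internal transition model from it, and then answers "what would happen if?" queries by rolling the model forward without touching the real environment. Real world models of the kind LeCun proposes are learned neural simulators over continuous sensor streams; this discrete sketch only conveys the overall structure, and every detail below is invented for illustration.

```python
# Toy world-model pipeline: collect experience, fit a transition model,
# then plan by imagination instead of real-world trial.
import random
from collections import Counter, defaultdict

SIZE = 5  # one-dimensional world with positions 0..4

def real_env(pos, action):
    """Ground-truth dynamics the agent cannot inspect: movement with slippage."""
    intended = max(0, min(SIZE - 1, pos + action))
    return intended if random.random() < 0.8 else pos  # 20% of moves slip

random.seed(1)

# 1) Collect raw experience from the real environment.
transitions = defaultdict(Counter)  # (pos, action) -> counts of next positions
for _ in range(5000):
    pos, action = random.randrange(SIZE), random.choice([-1, 1])
    transitions[(pos, action)][real_env(pos, action)] += 1

# 2) Fit the world model: empirical transition probabilities.
model = {key: {nxt: n / sum(c.values()) for nxt, n in c.items()}
         for key, c in transitions.items()}

# 3) Predict consequences internally, without acting in the real world.
def imagine(pos, actions):
    """Most likely trajectory under the learned model."""
    path = [pos]
    for a in actions:
        pos = max(model[(pos, a)], key=model[(pos, a)].get)
        path.append(pos)
    return path

print(imagine(0, [1, 1, 1]))  # model's best guess after three rightward moves
```

The design point this illustrates is the separation of concerns: once the model is fitted, planning and "what if" reasoning become cheap internal computation, which is exactly the property that matters for robotics and other settings where real-world trial and error is slow or dangerous.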
The commercial applications are immediate: robotics that can navigate unstructured environments, autonomous systems that can reason about physical consequences, and AI assistants that understand not just what users say but the physical and social context in which they say it.
AI for Chip Design
Ricursive Intelligence's focus on chip design automation represents a pragmatic application of AI's reasoning capabilities. As AI systems grow larger, the chips that power them become more complex and expensive to design. Goldie and Mirhoseini, who contributed to DeepMind's AlphaChip project, are applying the same learning-from-experience methodology to semiconductor design.
The strategic significance is enormous. The AI industry's progress is currently constrained by chip availability and cost. If AI can design better chips faster than humans, it could break this bottleneck—and the company that enables that breakthrough would be positioned at the center of the next phase of AI expansion.
Autonomous Laboratories
Periodic Labs' vision of autonomous laboratories—AI systems that can design, execute, and interpret scientific experiments without human supervision—represents perhaps the most ambitious application. If successful, such systems could accelerate scientific discovery by orders of magnitude, testing hypotheses and exploring parameter spaces faster than human researchers.
The Funding Frenzy: Understanding the "Coconut Round" Phenomenon
The scale of funding flowing to these startups defies traditional venture capital logic. Ineffable Intelligence achieved a $5.1 billion valuation before shipping a product. AMI Labs reached $3.5 billion pre-money. These are not gradual escalations—they are instant multibillion-dollar valuations based almost entirely on founder reputation and research vision.
Several factors explain this phenomenon:
Scarcity of Proven AI Talent
There are perhaps two dozen researchers in the world with track records of building transformative AI systems. When they become available, investors compete aggressively for access. The $1.1 billion raised by Ineffable Intelligence reflects not just belief in the specific vision but conviction that David Silver's involvement makes any AI company he leads worth billions by default.
Fear of Missing Out on the Next OpenAI
Venture capital firms that missed OpenAI's early rounds are determined not to repeat the mistake. The potential returns from being early in the next transformative AI company justify extraordinary entry prices. Sequoia Capital, Lightspeed Venture Partners, Index Ventures, Google, and Nvidia all participated in Ineffable's round—an all-star lineup reflecting competitive pressure as much as analytical conviction.
Strategic Investments from Incumbents
Google and Nvidia's participation in funding rounds for companies founded by their former employees is particularly notable. These are not purely financial investments—they are hedges against technological disruption. If reinforcement learning without human data proves to be the path to more capable AI, Google and Nvidia want ownership stakes in the companies pioneering it, even if those companies compete with their existing products.
Sovereign Interest
The British Business Bank and the UK's Sovereign AI fund invested in Ineffable Intelligence, reflecting national strategic interest in AI capability. The UK government explicitly frames this as backing "breakthrough AI that can discover new knowledge." As AI becomes central to economic competitiveness and national security, government funds are joining private capital in supporting domestic AI champions.
The Implications: How This Reshapes the AI Landscape
The brain drain from Big Tech to startups will have consequences that extend far beyond the individual companies involved.
Fragmentation of the AI Stack
The current AI industry is relatively concentrated: a handful of companies (OpenAI, Google, Anthropic, Meta) produce the frontier models that power most applications. The startup exodus is creating a more fragmented landscape where specialized models compete with general-purpose ones, and different architectural approaches challenge the dominance of scaling large language models.
This fragmentation could accelerate innovation by increasing the diversity of approaches being pursued. It could also complicate the ecosystem for enterprises, which must evaluate a wider range of models and providers.
The London AI Hub
The concentration of these startups in London—Ineffable Intelligence, Recursive Superintelligence, and Jeff Bezos's Project Prometheus are all establishing significant presence there—suggests the emergence of a genuine counterweight to Silicon Valley in AI development. DeepMind's continued presence after its Google acquisition, combined with favorable UK government funding and a deep pool of European research talent, is creating a self-reinforcing ecosystem.
Safety and Governance Challenges
The companies being founded by departing researchers are explicitly pursuing more powerful AI systems through novel approaches. Reinforcement learning without human data, recursive self-improvement, and autonomous experimentation all raise safety concerns that the research community has not fully addressed.
Unlike OpenAI and Anthropic, which have invested heavily in safety research and public governance frameworks, these startups are new entities with unproven safety cultures. The concentration of advanced AI development in well-resourced, well-governed labs has been a deliberate strategy to manage existential risk. Its dispersion into a larger number of smaller, less transparent companies complicates that strategy.
Talent Competition Intensifies
The funding windfall enables these startups to hire aggressively from their founders' former employers. Goldie explicitly told CNBC that Ricursive Intelligence "got the core AlphaChip team back together, and that involved hiring some of our old collaborators." Other team members came from Google, Anthropic, Nvidia, Apple, and xAI.
This creates a self-reinforcing cycle: funding enables hiring, hiring accelerates development, development attracts more funding. Big Tech labs may find themselves in genuine talent competition with well-funded startups for the first time.
What to Watch: The Key Questions for 2026 and Beyond
As this migration unfolds, several questions will determine its ultimate significance:
Can these startups deliver on their ambitious visions?
Raising $1 billion is not the same as building transformative AI. The gap between research vision and product reality is where most startups fail. The next 18-24 months will reveal whether these companies can translate their founders' reputations into working systems.
Will Big Tech respond by improving research conditions?
If the brain drain threatens their competitive position, major labs may improve conditions for long-term research, offer more attractive equity packages, or create internal incubators that give researchers startup-like autonomy. Meta, Google, and OpenAI all have incentives to prevent further departures.
How will safety governance adapt?
The dispersion of advanced AI development into more startups creates governance challenges. Will regulatory frameworks emerge to ensure safety research keeps pace with capability development? Or will the competitive pressure to ship products override caution?
Which architectural approach will prove most scalable?
The startups are betting on different technical directions—reinforcement learning, world models, autonomous experimentation—while Big Tech continues scaling language models. The market will ultimately test these approaches, and the winning architecture may be one that has not yet been invented.
Conclusion
The great AI brain drain of 2026 is not merely a story of individuals changing employers. It is a signal that the AI industry is entering a new phase—one where the dominant paradigm is being questioned, where research freedom commands a billion-dollar premium, and where the next breakthrough may come from a startup founded last month rather than a lab established a decade ago.
For enterprises and investors, the message is clear: the AI landscape is becoming more diverse, more competitive, and more unpredictable. The companies that defined the current era of AI are no longer the only places where the future is being invented. The researchers who built the systems we use today are now building the systems that may replace them.
The concentration of capital, talent, and ambition in these new ventures suggests that the next transformative leap in AI capability may come not from incremental improvements to existing models but from fundamentally different approaches pursued by researchers who refused to accept the boundaries of current paradigms.
David Silver, Yann LeCun, and their peers are betting their reputations, their careers, and billions of dollars of other people's money that there is a better way to build intelligence. Whether they are right will determine the shape of AI—and perhaps civilization itself—for decades to come.
--
Published on April 29, 2026 | Category: Startups
Sources: CNBC, TechCrunch, Bloomberg, Financial Times, Wired, Dealroom