$4 BILLION BET ON THE END OF HUMANITY: Ex-Salesforce Scientist Raises Half a Billion to Build AI That Replaces AI Researchers—And Eventually YOU
They used to say it was science fiction. Then they said it was decades away. Now they're putting $4 BILLION on it happening in the next few years.
Richard Socher, former chief scientist at Salesforce, has just done something that should terrify every human being on Earth. His four-month-old startup, Recursive Superintelligence, raised $500 million at a $4 billion valuation—led by Google's venture arm GV with backing from chipmaking giant Nvidia—to build artificial intelligence systems that can improve themselves without human intervention.
If you're not alarmed yet, you should be. Because what Socher is building isn't just another chatbot. It's the exact technology that AI safety experts have been warning could end human civilization.
The Nightmare Scenario Has a Name: Recursive Superintelligence
Let's break down what "recursive self-improvement" actually means, because it's the most important—and terrifying—concept in AI right now.
Imagine an AI system that can design better AI systems. Not just incrementally better—exponentially better. Each version of the AI builds a smarter version of itself, which builds an even smarter version, creating a feedback loop of accelerating intelligence that eventually leaves human intelligence in the dust.
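To see why that loop is different from ordinary progress, here's a toy back-of-the-envelope sketch (ours, not Recursive's; every number in it is an illustrative assumption): fixed human-driven gains add up, while self-improvement multiplies, and the gap explodes.

```python
# Toy illustration of compounding self-improvement vs. linear progress.
# Every number here is an illustrative assumption, not a published figure.

GENERATIONS = 20
HUMAN_GAIN_PER_CYCLE = 1.0   # linear: humans add a fixed increment per cycle
SELF_IMPROVE_FACTOR = 1.25   # compounding: each AI builds a 25%-better successor

human_capability = 1.0
ai_capability = 1.0

for gen in range(1, GENERATIONS + 1):
    human_capability += HUMAN_GAIN_PER_CYCLE  # arithmetic growth
    ai_capability *= SELF_IMPROVE_FACTOR      # geometric growth
    print(f"gen {gen:2d}: human-driven {human_capability:5.1f} | "
          f"self-improving {ai_capability:7.1f}")
```

After 20 cycles the linear track sits at 21x while the compounding track is near 87x, and every extra cycle widens the gap. That divergence, not any single clever model, is the "feedback loop of accelerating intelligence."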
This isn't theoretical musing from science fiction. This is what Socher's company is explicitly trying to build. And they just raised half a billion dollars to do it.
When asked which parts of AI development Recursive wants to automate—evaluation, data selection, training, post-training, or research direction itself—Socher's answer was chilling: "Option C, all of the above."
Translation: They want to automate the entire process of building AI. Including the part where humans currently make the decisions.
The Team That Shouldn't Exist
If you're thinking this sounds like the plot of a dystopian movie, wait until you hear who's involved. The Financial Times named Recursive's co-founders, a who's-who of AI's brightest minds. Among them:
- Tim Shi: Former OpenAI researcher
These aren't fringe researchers. These are the people who built the systems you're already using. The people who understand AI capabilities better than almost anyone on Earth. And they've collectively decided that the next frontier isn't making AI tools for humans—it's making AI that doesn't need humans at all.
As one person close to the startup told the Financial Times: "It's a ridiculously strong team."
That's one way to put it. Another way: It's the team most likely to succeed at building the technology that makes human researchers obsolete.
The $4 Billion Question: What Happens When AI Doesn't Need Us?
Socher is remarkably candid about what he's trying to build. He calls self-improving AI "the third and perhaps final stage of neural networks."
Let that phrase sink in: "final stage." The end of the line. The point where the technology no longer needs its creators.
Here's Socher's vision in his own words: "The biggest bottleneck is in people's heads. In the ideas and the speed at which you have to manually implement and validate them."
He sees human researchers as a bottleneck. As something to be eliminated from the process. And he's not alone in thinking this is imminent—Socher mentioned losing a recent hire because the candidate believed AI researchers themselves would be automated within a few years.
This isn't some distant future. This is happening now. Companies are already betting billions that human obsolescence in AI research is inevitable and imminent.
The Experts Are Terrified
While venture capitalists are throwing money at recursive self-improvement, the people who actually study AI risk are sounding alarms that should be impossible to ignore.
The 2026 International AI Safety Report, chaired by Turing Award winner Yoshua Bengio and backed by over 100 international experts, found that general-purpose AI capabilities are advancing faster than safeguards can keep pace. The report specifically noted that "the gap between the pace of technological advancement and our ability to implement effective safeguards remains a critical challenge."
Economist Noah Smith, who has been tracking AI development for years, recently published an essay titled "Updated thoughts on AI risk" that marked a dramatic shift in his assessment. Three years ago, he argued that large language models weren't a threat to humanity. Now? He's "gotten a lot more worried about existential, catastrophic AI risk."
"Things have gotten scarier since 2023," Smith wrote. "A bunch of people wrote to me and asked me: 'What made you change your mind?'"
What made him change his mind was watching AI capabilities advance faster than anyone predicted, while safety research struggled to keep pace. The exact scenario that Recursive Superintelligence is now accelerating with half a billion dollars in fresh funding.
The Arms Race Nobody Can Stop
Recursive isn't alone in this pursuit. They're joining a growing cohort of labs spinning out of frontier AI companies, all chasing the same terrifying prize. Among them:
- Advanced Machine Intelligence Labs: Founded by Meta's Yann LeCun, raised $1 billion
In the first quarter of 2026 alone, venture capitalists poured $300 billion into AI startups, according to Crunchbase, much of it chasing the same bet: that these new labs might outflank OpenAI and Anthropic by building AI that improves itself.
This isn't just a technological race. It's an existential gamble with stakes that include the future of human civilization. And there's no regulatory framework capable of stopping it.
"Europe Is Just the Most Beautiful Open-Air Museum"
Here's another detail that should keep you awake at night: Recursive Superintelligence deliberately incorporated in London to avoid the EU's AI Act.
Socher didn't hide his contempt for European regulation. "The EU AI Act has slowed the whole region down even more than it already was," he told reporters. "It's really sad how often Europe shoots itself in its own feet."
The line he keeps hearing from founders? "Europe is just the most beautiful open-air museum."
Translation: Companies building the most powerful—and potentially most dangerous—AI systems are specifically locating themselves in jurisdictions with the least oversight. They're fleeing regulation that might slow them down, even if that regulation exists for good reason.
When the people building civilization-ending technology are openly mocking the idea of safety regulations, what chance do we have?
The Technical Milestones They Won't Talk About
Socher claims Recursive has already hit "undisclosed technical milestones" that he "unfortunately can't share yet." The public launch is expected in May 2026.
Think about what that means. A company with a team of the world's top AI researchers, backed by Google and Nvidia, has already achieved breakthroughs they consider too sensitive to discuss publicly. In four months of existence.
What have they built? What capabilities has Recursive already demonstrated? The secrecy should terrify you more than any press release could.
The Financial Times carefully noted that the concept of self-improving AI "remains unproven over extended periods." The technology is at "research stage, not product stage."
But here's the problem: once it works—once an AI system actually demonstrates the ability to recursively improve itself—we may not have time to react. The intelligence explosion happens fast. By the time we realize we need safeguards, the AI may already be designing safeguards we can't understand.
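The "no time to react" worry has a simple arithmetic form. Here's a hypothetical sketch (our numbers, chosen only for illustration): if each improvement cycle also shortens the next one, the total elapsed time stays bounded even as the number of cycles, and the capability, runs away.

```python
# Toy arithmetic behind "fast takeoff": if each generation iterates faster,
# arbitrarily many improvement cycles fit inside a finite window.
# All numbers are made up for illustration.

cycle_time = 90.0  # days for the first improvement cycle (assumed)
speedup = 1.25     # each generation runs its next cycle 25% faster (assumed)
elapsed = 0.0

for gen in range(1, 41):
    elapsed += cycle_time
    cycle_time /= speedup  # the next cycle is shorter than this one
    if gen in (1, 5, 10, 20, 40):
        print(f"gen {gen:2d}: {elapsed:6.1f} days elapsed")
```

The cycle times form a geometric series that converges to 90 / (1 - 1/1.25) = 450 days: no matter how many generations run, everything fits inside roughly fifteen months. The numbers are arbitrary; the point is that a shrinking cycle time turns "eventually" into "before anyone can respond."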
The Inevitability Trap
Socher frames self-improving AI as "the third and perhaps final stage of neural networks." The inevitable endpoint of a technological trajectory that started with machines learning features on their own, continued with unified models replacing task-specific architectures, and now approaches systems that train themselves.
But calling something inevitable doesn't make it safe. Calling something the "final stage" doesn't mean humans have a place in it.
The narrative of inevitability serves a purpose: it removes moral agency from the people building these systems. If recursive superintelligence is just the natural evolution of AI, then the people racing to build it first aren't taking a reckless gamble with humanity's future—they're just following the technological arc.
Don't believe it. This is a choice. The $4 billion valuation, the all-star team, the Google and Nvidia backing—all of it represents a conscious decision to prioritize speed over safety, capability over control, progress over precaution.
The Obsolescence Clock Is Ticking
Socher isn't just building AI that doesn't need human researchers. He's building AI that could eventually make all human labor obsolete.
If an AI system can improve itself, it can probably improve other systems too. Software engineering, scientific research, strategic planning, creative work—every domain that currently requires human intelligence becomes vulnerable to automation by a system that thinks exponentially faster than any human.
The candidate who turned down a job at Recursive because they believed AI researchers would be automated in a few years wasn't being paranoid. They were being realistic.
The question isn't whether this will happen. The question is what happens to human civilization when it does.
The Bottom Line: Your Future Is Being Decided Without You
While you were sleeping, a group of the world's smartest people raised half a billion dollars to build technology that could make human intelligence irrelevant. They chose a jurisdiction with minimal oversight. They assembled a "ridiculously strong team" of researchers from the top AI labs. They set their sights on automating the entire AI development pipeline.
And they're just one of several well-funded labs pursuing the same goal.
The International AI Safety Report warned that "the gap between the pace of technological advancement and our ability to implement effective safeguards remains a critical challenge." Recursive Superintelligence's $4 billion valuation proves that gap is widening, not closing. The people with money and talent are betting on capability, not safety.
Noah Smith, who has been covering this space for years, put it plainly: "Things have gotten scarier since 2023."
He's right. And they're about to get scarier still.
Because when the people building AI systems start talking about the "final stage" of neural networks—when they describe human researchers as a "bottleneck" to be eliminated—when they raise half a billion dollars to remove humans from the loop entirely—it's time to ask a terrifying question:
If they succeed, what happens to the rest of us?
The answer may be that it doesn't matter what we think. By the time we find out, we may not be the ones making the decisions anymore.
Welcome to the recursive future. It's already here. And it's fundraising.
--
Sources: Financial Times, Implicator.ai, 2026 International AI Safety Report, Noahpinion blog, company statements