Your bank account. Your hospital records. The water treatment plant in your city. Your entire digital life. All of it is now hanging by a thread that just got infinitely thinner.
In the last 7 days, something happened that cybersecurity experts have been dreading for years. Two of the world's most powerful AI companiesâAnthropic and OpenAIâquietly released AI systems so capable of finding and exploiting software vulnerabilities that they've triggered emergency briefings at the highest levels of government.
Federal Reserve Chair Jerome Powell and Treasury Secretary Scott Bessent convened a closed-door meeting with America's largest bank CEOs. The UK Bank of England held emergency sessions with every major British financial institution. Japan began assessing its critical infrastructure. All because of one terrifying realization: AI just became better at hacking than the best human experts on the planet.
The AI That Scared the Federal Reserve
Let's be clear about what we're dealing with. Anthropic's Claude Mythos Preview isn't just another chatbot. It's an AI system so dangerous that Anthropicâa company that wants to make money by selling AI accessârefused to release it publicly.
Think about that for a second. A profit-driven Silicon Valley company looked at their own creation and said, "This is too dangerous to sell."
The Numbers That Terrify Governments
The UK AI Security Institute's evaluation reads like a horror story:
- Autonomous capability for reconnaissance, exploitation, privilege escalation, lateral movement, and data exfiltration
During internal testing, Mythos discovered a vulnerability in OpenBSDâa security-focused operating system used in critical infrastructure worldwideâthat had gone undetected for 27 years. Twenty-seven years of human security research, countless expert audits, and sophisticated tools. All missed it. The AI found it in one session.
> "We should be planning for a world where, within six months to 12 months, capabilities like this could be broadly distributed or made broadly available, not just by companies in the United States. If you step back, that's a pretty crazy time frame, where usually preparations for things like this take many years."
>
> â Logan Graham, Head of Offensive Cyber Research, Anthropic
OpenAI's Counter-Move: Thousands of "Trusted" Hackers
Not to be outdone, OpenAI dropped their own bombshell just days later: GPT-5.4-Cyber. While Anthropic locked their model behind Project Glasswing (just 40 organizations worldwide), OpenAI took a different approachâthousands of verified defenders now have access to AI-powered offensive cyber capabilities.
OpenAI's model is specifically designed to remove the "friction" that security professionals faced with previous AI tools. Translation: It refuses fewer dangerous queries. It'll analyze compiled software for weaknesses even without source code. It'll help find exploits faster than ever before.
The company spins this as "democratizing access for defenders." But here's what they're not shouting about: the same capabilities that help defenders patch vulnerabilities can be weaponized by attackers to find and exploit them.
OpenAI's Codex Security product has already contributed to fixes on 3,000+ critical and high-severity vulnerabilitiesâbut how many were found by attackers first?
The "Vulnpocalypse": Why Experts Are Terrified
Cybersecurity professionals have a term for what's coming: the "Vulnpocalypse." It's the moment when AI gives attackers such an overwhelming advantage that traditional defense becomes nearly impossible.
Here's the brutal math: A defender needs to patch every vulnerability. An attacker only needs to find one. AI just made finding that one vulnerability exponentially easier.
Casey Ellis, founder of Bugcrowd, put it bluntly: "AI puts the kind of tools available to do this in the hands of far more people." The barrier to entry for sophisticated cyber attacks just collapsed. Script kiddies with AI assistance can now do what previously required nation-state resources.
Your Bank Is Already Under Siege
Financial institutions are particularly exposed. They run technology stacks spanning both cutting-edge systems and decades-old legacy infrastructure. Every line of code is a potential vulnerability, and many haven't been audited in years.
Costin Raiu, co-founder of cybersecurity firm TLPBLACK, told Reuters that a model like Mythos would have "a field day" finding exploits in certain IBM systemsâsystems that power trillions of dollars in global financial transactions.
JPMorgan Chase was among the first Project Glasswing partners, racing to use Mythos to find vulnerabilities in their own systems before malicious actors could. Goldman Sachs CEO David Solomon confirmed his bank is already working with Anthropic on defenses.
But here's the terrifying question: What about every other bank? Every hospital? Every power plant? Every water treatment facility? They're not in Project Glasswing. They don't have access to these defensive AI tools. They're sitting ducks.
Hospitals, Power Plants, and Critical Infrastructure
Cynthia Kaiser, a former senior FBI cyber official, is deeply concerned about how AI will empower "mediocre hackers" who previously lacked the skills to attack critical infrastructure.
"The wannabes, this undercurrent of people who have not been capable of doing these operations just a year ago, now have some of the most powerful tools ever known to humankind in their hands," she warned NBC News.
Health care and critical manufacturing were already the most targeted sectors for ransomware attacks last year. With AI assistance, attackers can now identify vulnerabilities in industrial control systemsâsystems often running on obscure software that previously required specialized expertise to compromise.
Remember the 2024 Change Healthcare ransomware attack that disrupted prescription processing nationwide? Picture that, but coordinated across thousands of hospitals simultaneously, with AI identifying the vulnerabilities and orchestrating the attacks.
The GitHub Kill Switch: AI Agents Already Compromised
While governments panic about future threats, researchers have already demonstrated how current AI agents can be hijacked TODAY.
Security researcher Aonan Guan and his team at Johns Hopkins University discovered that AI agents integrated with GitHubâClaude Code Security Review, Google's Gemini CLI Action, and Microsoft's GitHub Copilotâcan be hijacked through "comment-and-control" prompt injection attacks.
Here's how it works: A malicious actor injects hidden instructions into a pull request title or issue comment. The AI agent reads these instructions and executes them, potentially stealing API keys, access tokens, and credentials.
> "I know for sure that some of the users are pinned to a vulnerable version. If they don't publish an advisory, those users may never know they are vulnerableâor under attack."
>
> â Aonan Guan, Security Researcher, Johns Hopkins University
All three vendors paid bug bounties. None published public advisories or assigned CVEs. Your development team might be using vulnerable AI agents right now, completely unaware.
The China Factor: This Technology Won't Stay Contained
Perhaps the most chilling realization is that this technology won't remain locked behind American corporate firewalls for long.
Logan Graham explicitly warned that we should expect comparable capabilities from competitorsâincluding those in Chinaâwithin 6 to 12 months. Chinese AI labs like DeepSeek and Moonshot AI are rapidly closing the gap. When they achieve similar vulnerability discovery capabilities, there will be no Project Glasswing. No Trusted Access programs. No restrictions.
Scott Bessent, the US Treasury Secretary, called Mythos "a breakthrough in the AI race against China." But it's a race where winning might mean building the most dangerous cyber weapon ever created.
What Happens Next: Three Scenarios
Experts are divided on what comes next, but three scenarios dominate the conversation:
Scenario 1: The Arms Race Escalates (Most Likely)
Defensive AI vs. Offensive AI becomes the new normal. Banks, hospitals, and critical infrastructure deploy AI to continuously scan for vulnerabilities and patch them automatically. Attackers use AI to find new ones. The cycle accelerates until only organizations with AI defenses can survive. Everyone else becomes collateral damage.
Scenario 2: The Vulnpocalypse Hits
A coordinated attack exploiting AI-discovered vulnerabilities causes cascading failures across critical infrastructure. Think CrowdStrike, but intentional and 100x worse. Hospitals go offline. Payment systems freeze. Water treatment plants fail. The internet fragments as regions disconnect to prevent contagion.
Scenario 3: Regulatory Crackdown
Governments impose strict export controls and usage restrictions on advanced AI models. The technology becomes classified. Companies need security clearances to develop or deploy frontier AI. Innovation slows, but so does proliferation.
What You Can Do Right Now
While the macro picture is terrifying, individual action isn't futile:
- Monitor your accounts. Check bank statements, credit reports, and unusual activity obsessively.
The Bottom Line
We are witnessing a fundamental shift in the cybersecurity landscape. AI models that can autonomously discover and exploit vulnerabilities represent a capability previously reserved for nation-states, now potentially available to anyone with an internet connection.
The Federal Reserve doesn't convene emergency meetings over hype. The Bank of England doesn't brief every major bank because of speculative concerns. Something real and terrifying has emerged from the labs of Silicon Valley.
The question isn't whether the Vulnpocalypse will happen. It's whether you'll be ready when it does.