ALERT: The AI Model That Can HACK Your Business Just Dropped — And 73% of Safety Tests Were IGNORED

The UK government just issued an unprecedented open letter to every business leader in the country. This isn't routine. This is an emergency.

Last week, something shifted in the artificial intelligence landscape that should have every CEO, CTO, and business owner reaching for their disaster recovery plans. Anthropic released a new AI model called Mythos — and government testing has confirmed what the company itself admitted: it's "substantially more capable at cyber offence than any model we have previously assessed."

The UK Secretary of State for Science, Innovation and Technology didn't mince words in a letter sent April 15, 2026: "The threat your business faces in cyber space is changing, and the way we respond must change with it."

This isn't fear-mongering. This is your government telling you that the rules of cybersecurity have been rewritten overnight.

The Terrifying Truth About AI Capability Doubling

Here's the number that should keep you awake at night: AI capabilities are now doubling every 4 months.

Let that sink in. Every four months, the AI systems available to attackers — both state-sponsored and criminal — become twice as capable as they were before. The UK's AI Security Institute (AISI), one of the world's leading bodies for evaluating frontier AI, made this assessment after testing Anthropic's Mythos model.

Just twelve months ago, those same capabilities were doubling every 8 months. We've accelerated into hyperspeed, and your defensive posture hasn't kept pace.
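
To make that concrete, here is a back-of-the-envelope sketch (illustrative arithmetic only, using the AISI doubling times quoted above) of what those rates compound to over a single year:

```python
# Back-of-the-envelope: capability multiplier after a given number of
# months, if capability doubles every `doubling_time_months`.
def growth_over(months: float, doubling_time_months: float) -> float:
    return 2 ** (months / doubling_time_months)

print(f"8-month doubling: {growth_over(12, 8):.1f}x in a year")  # ~2.8x
print(f"4-month doubling: {growth_over(12, 4):.1f}x in a year")  # 8.0x
```

Moving from an eight-month to a four-month doubling time takes the year-over-year jump in attacker capability from roughly 2.8x to 8x.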

The implications are staggering. A model that could write basic phishing emails in January 2025 can now identify software vulnerabilities, write custom exploit code, and execute attacks at a scale and speed that would have been impossible even a year ago. The attackers aren't getting incrementally better — they're getting exponentially better.

And here's the kicker: while governments are scrambling to understand these threats, the companies building these systems are deploying them anyway.

73% of Safety Tests Overridden: The Shocking Truth About AI "Self-Regulation"

In a bombshell investigative report from The Editorial published April 13, 2026, internal documents revealed a pattern that should terrify anyone relying on AI companies to police themselves.

73% of pre-deployment safety reviews at major AI labs resulted in deployment despite initial recommendations for delay.

Read that again. Nearly three out of every four times that safety researchers said "wait, this might be dangerous," executives authorized deployment anyway. The reason? "Competitive pressure from rival labs."

A former safety team member at one major lab put it bluntly: "The frameworks were designed to be flexible enough that they could always be satisfied. The question was never 'does this meet our safety bar?' It was 'how do we justify deploying this?'"

This isn't hypothetical. We're not talking about edge cases or theoretical risks. We're talking about models capable of autonomous cyber offense being pushed to market because shareholders demanded it, because quarterly earnings reports took precedence over public safety.

The $12.5 Billion Wake-Up Call

Still think this is overblown? Let's talk numbers that hit closer to home.

The FBI's Internet Crime Complaint Center reported that losses from AI-facilitated fraud exceeded $12.5 billion in 2025 — up from $2.7 billion in 2023. That's a 363% increase in just two years, driven largely by sophisticated language model capabilities in social engineering and impersonation.
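
The percentage math checks out. A quick sanity check, using the FBI figures quoted above:

```python
# Sanity-check the FBI IC3 fraud-loss increase cited above.
losses_2023 = 2.7   # $ billions
losses_2025 = 12.5  # $ billions

pct_increase = (losses_2025 - losses_2023) / losses_2023 * 100
print(f"Two-year increase: {pct_increase:.0f}%")  # -> 363%
```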

MIT researchers documented a 340% increase in AI-generated phishing attacks between 2024 and 2025, with newer models demonstrating unprecedented ability to personalize deceptive content based on publicly available information about targets.

These aren't faceless statistics. These are businesses like yours — small companies that thought they were too insignificant to be targeted, mid-sized enterprises that believed their "basic security" was sufficient, large corporations that assumed their expensive cybersecurity contracts would protect them.

They were all wrong.

The New Attack Surface: Your Employees, Your Trust, Your Business

Here's what makes this moment uniquely dangerous: AI-powered attacks no longer look like attacks.

Traditional phishing emails had telltale signs — poor grammar, suspicious links, obviously fake domains. The new generation of AI-driven social engineering is indistinguishable from legitimate communication.

These systems can:

- Craft flawless, personalized deceptive messages using publicly available information about their targets
- Convincingly impersonate executives, vendors, and colleagues
- Identify software vulnerabilities and write custom exploit code
- Target thousands of businesses simultaneously, at machine speed

A Stanford Internet Observatory study documented 147 distinct AI-generated disinformation campaigns targeting elections in 2025 — a fivefold increase from 2024. Many exploited persuasion capabilities that safety researchers had specifically flagged as concerning during pre-deployment evaluations.

If nation-state actors and sophisticated criminal organizations are using these tools to influence elections, what do you think they're doing to your business?

The AI Arms Race No One Asked For

In February 2026, the US Secretary of Defense designated Anthropic a "supply chain risk to national security" after the company refused to allow its models to be used for mass surveillance or fully autonomous lethal weapons. A San Francisco judge initially blocked the designation, but a federal appeals panel recently reinstated it, denying Anthropic's bid to stay the blacklisting.

Meanwhile, Anthropic has reportedly received investment offers valuing the company at $800 billion — more than double its previous valuation. The company's annual run-rate revenue has skyrocketed to $30 billion as of April 2026.

Sit with that for a moment. A company that built a model so dangerous the UK government had to issue an emergency warning to businesses is now fielding offers that value it at $800 billion.

This is the incentive structure we're operating in. Safety is a liability. Speed to market is everything. And your business is caught in the crossfire.

Google's Response: Robots That Can Read Your Gauges

As if cyber offense capabilities weren't enough, Google DeepMind just released Gemini Robotics-ER 1.6 — an AI model specifically designed to enable robots to understand and interact with physical environments with "unprecedented precision."

This isn't science fiction. Boston Dynamics is already using this technology in their Spot robots to autonomously navigate facilities, read pressure gauges, interpret chemical sight glasses, and monitor industrial equipment.

The capabilities are impressive: pointing to objects with spatial reasoning, counting items in images, making relational comparisons, mapping trajectories, identifying optimal grasp points. It can read complex instruments including circular pressure gauges, vertical level indicators, and modern digital readouts.

But here's the question no one wants to ask: what happens when these capabilities are turned against you?

The same spatial reasoning that lets a robot navigate your factory floor can be used to map your security vulnerabilities. The same instrument-reading capabilities that monitor your systems can be repurposed to identify critical infrastructure weak points.

We've created AI systems that can physically navigate and understand our environments, and we've released them into the world with the same rushed safety evaluations that failed to catch the cyber risks.

The Government's Three-Point Plan (That You're Probably Ignoring)

The UK government letter wasn't just a warning — it was a roadmap. And most businesses are failing to follow it.

Step 1: Take cybersecurity seriously at the board level.

When was the last time your board discussed cyber risk? If it wasn't at your last meeting, you're already behind. The government specifically warns: "This is not an issue to delegate to your IT team and forget about."

Use the Cyber Governance Code of Practice. Implement the NCSC's Cyber Assessment Framework. Plan and rehearse your incident response. And consider cyber insurance: it's available free to small organizations that obtain Cyber Essentials.

Step 2: Get the basics right with Cyber Essentials.

Most successful cyber-attacks exploit simple weaknesses: outdated software, weak passwords, missing backups. Cyber Essentials is the government-backed certification scheme that protects against the most common attacks.

Organizations that hold it are significantly less likely to suffer a damaging cyber incident. For most businesses, getting certified is neither expensive nor difficult.

Step 3: Follow NCSC advice and sign up for their Early Warning Service.

The National Cyber Security Centre provides free, practical advice, training and guidance. Their Early Warning service can inform you of potential cyber attacks before they escalate, giving you invaluable time to act.

These aren't optional extras. In the words of the UK government: "The businesses that act now — that treat cyber security as an essential part of running a modern company, not an optional extra — will be the ones best placed to thrive through it and seize its advantages."

The 59 Researchers Who Walked Away

Perhaps the most damning signal of all: 59 top AI researchers have quit major labs over safety concerns. They walked away from millions in equity. The people who built these systems no longer trust their own companies to deploy them safely.

OpenAI's superalignment team lost its co-leads, Ilya Sutskever and Jan Leike, in May 2024. At least 38 senior safety researchers have departed OpenAI, Anthropic, and Google DeepMind since January 2025. Multiple departing employees cited frustration with safety recommendations being overruled.

When the people who understand these systems best are leaving in protest, what does that tell you about the systems themselves?

The Regulatory Vacuum: No One's Coming to Save You

Here's the uncomfortable truth: no one is coming to save you.

The EU AI Act's most stringent provisions don't take effect until August 2026. In the United States, President Biden's Executive Order on AI Safety established reporting requirements but created no enforcement mechanisms with meaningful penalties. The AI Safety Institute operates with fewer than 100 people and an annual budget of $10 million — roughly what OpenAI spends on computing in a single week.

Proposed legislation has stalled repeatedly in Congress. The bipartisan AI Research, Innovation, and Accountability Act, which would have established mandatory pre-deployment testing and created liability for AI systems that cause foreseeable harms, failed to advance out of committee after intensive lobbying from the technology industry.

The technology industry spent $94 million on AI-related lobbying in 2025.

Your protection is not a priority. Their quarterly earnings are.

What You Need to Do TODAY

If you've read this far and you're still thinking "this doesn't apply to me" or "my business is too small to be a target," you are exactly the person who needs to act.

Attackers go where defenses are weakest. Small businesses, mid-sized companies, organizations without dedicated security teams — these are the targets of choice because they're easy.

The AI systems being deployed right now don't discriminate. They can target thousands of businesses simultaneously, scaling attacks in ways that were never possible before.

Today, you need to:

- Put cyber risk on your next board agenda, and keep it there
- Start Cyber Essentials certification to close off the most common attack paths
- Sign up for the NCSC's free Early Warning service
- Write down, and rehearse, your incident response plan

The Bottom Line: Adapt or Become a Statistic

We're entering a period in which the pace of technological change will test every institution in every country. The AI capabilities available to attackers are doubling every four months. Your defensive measures are not.

The businesses that treat this as the emergency it is — that take immediate action to shore up their defenses, educate their people, and prepare for the inevitable — will be the ones that survive.

The ones that don't? They'll be in next year's FBI statistics. They'll be the cautionary tales in government reports. They'll be the companies that knew the warning was coming and did nothing.

The AI models that can hack your business are already here. The only question is whether you'll be ready when they come for you.
