THE GREAT DECEPTION: OpenAI Just Quietly Downgraded 'Mass Manipulation' as a Critical Risk—And You're About to See Why That's Terrifying

The Alarming Truth They Don't Want You to Know About AI Persuasion and the Coming Wave of Automated Deception

--

Here's what makes OpenAI's decision so dangerous: we already know how persuasive these systems are.

Researchers at MIT and Cornell have demonstrated that conversations with AI chatbots can be remarkably effective at changing minds. In a 2024 study published in Science, short dialogues with a large language model durably reduced participants' belief in conspiracy theories by roughly 20 percent, an effect that persisted for at least two months. Think about what that means: if AI can talk people out of beliefs about shadowy government plots and hidden truths, what else can it talk them into?

The answer should terrify you: absolutely anything.

We already live in a world drowning in misinformation. Social media algorithms have created filter bubbles that radicalize users and polarize societies. Now imagine those same algorithms, but powered by systems that can:

- hold convincing, human-like conversations with millions of people simultaneously
- research individual targets and tailor messages to their specific psychological vulnerabilities
- build trust over days or weeks of sustained interaction
- adapt their persuasive approach in real time based on how each person responds

That's not science fiction. That's what OpenAI is building. And they've just signaled they don't think that's a "critical risk."

--

Let's talk about timing. OpenAI made this change in April 2025, a period when democratic institutions worldwide are facing unprecedented challenges:

- The 2024 election cycle was the largest in history, with roughly half the world's population living in countries that held national votes.
- AI-generated deepfakes have already appeared in real campaigns, from the fake Biden robocall in New Hampshire to fabricated candidate audio in Europe.
- Public trust in media and government institutions sits at or near historic lows.
- OpenAI itself has reported disrupting covert influence operations that used its own models.

In this environment, OpenAI has decided that the ability of its systems to manipulate and deceive at scale is no longer worth rigorous pre-deployment evaluation. The timing could not be worse.

Oren Etzioni, former CEO of the Allen Institute for AI, captured the danger perfectly: "Downgrading deception strikes me as a mistake given the increasing persuasive power of LLMs. One has to wonder whether OpenAI is simply focused on chasing revenues with minimal regard for societal impact."

That question—whether profit is trumping safety—has never been more urgent.

--

To understand why this matters, let's consider some concrete scenarios that OpenAI apparently no longer considers "critical risks":

Scenario 1: Election Manipulation at Scale

Imagine a bad actor using OpenAI's models to generate millions of personalized messages designed to suppress voting in specific demographic groups. The AI could craft appeals that discourage young voters, convince elderly voters that their polling location has changed, or spread targeted disinformation that exploits psychological vulnerabilities unique to each recipient. With OpenAI's models becoming increasingly persuasive, and with the company no longer rigorously testing these capabilities before deployment, we're entering an era where automated election manipulation could be devastatingly effective.

Scenario 2: Financial Fraud and Scams

AI systems can already convincingly mimic human conversation. Now imagine scammers using OpenAI's technology to conduct sophisticated social engineering attacks at scale. The AI could research targets, craft personalized approaches based on their social media activity, hold convincing conversations that build trust over time, and eventually extract money or sensitive information. The elderly, the vulnerable, and the unsuspecting would be particularly at risk—and OpenAI has decided this isn't a "critical risk."

Scenario 3: Radicalization and Recruitment

Extremist groups have always sought to recruit vulnerable individuals. AI dramatically scales that capability. A sophisticated AI system could identify at-risk individuals through their online behavior, engage them in seemingly supportive conversations, gradually introduce extremist ideas, and shepherd them toward radicalization, all while appearing to be a concerned friend. This isn't hypothetical: as early as 2020, researchers at Middlebury's Center on Terrorism, Extremism, and Counterterrorism documented GPT-3's capacity to generate convincing extremist content with minimal prompting.

Scenario 4: Corporate Espionage and Sabotage

Imagine a competitor using AI to convince your employees to leak sensitive information. The AI could research individual employees, understand their grievances and motivations, and craft approaches that feel like genuine human connection but are actually calculated manipulation. Given the persuasive capabilities of modern AI, the success rate of such campaigns could be terrifyingly high.

--

OpenAI's decision isn't happening in a vacuum. It's part of a broader pattern across the AI industry that should concern everyone:

- OpenAI dissolved its Superalignment safety team in 2024 following the departures of Ilya Sutskever and Jan Leike, who warned that "safety culture and processes have taken a backseat to shiny products."
- Google quietly dropped its longstanding pledge not to apply AI to weapons or surveillance in early 2025.
- Reporting in April 2025 indicated that OpenAI had compressed safety-testing timelines for new models from months to, in some cases, days.

Steven Adler, a former OpenAI safety researcher, noted the concerning trend: "OpenAI is quietly reducing its safety commitments."

When even insiders are raising alarms, you know the situation is dire.

--

Based on OpenAI's actions and the trajectory of AI development, here's what I predict will happen over the next 12-24 months:

The Floodgates Open

Without rigorous pre-deployment testing for manipulation capabilities, expect to see AI-powered persuasion campaigns explode in both volume and sophistication. We're talking about personalized disinformation at a scale never before possible, targeting not just elections but consumer behavior, political beliefs, financial decisions, and social relationships.

The Trust Erosion Accelerates

As AI-generated manipulation becomes ubiquitous, trust in digital communication will collapse. Every message, every video, every "personal" interaction will be suspect. The social fabric that holds communities together—shared understanding, mutual trust, common knowledge—will fray further.

The Regulatory Scramble

Governments will eventually attempt to regulate AI persuasion, but they'll be years behind the technology. By the time legislation passes, the genie will be out of the bottle. And given the global nature of AI development, domestic regulation will have limited effectiveness.

The Arms Race Intensifies

OpenAI's own framework concedes the dynamic: if a rival developer releases a high-risk system without comparable safeguards, the company says it may adjust its own requirements in response. We're witnessing the beginning of a race to the bottom on AI safety, where competitive pressure overwhelms ethical considerations.

--

I reached out to OpenAI for comment on this article. They didn't respond. But here are the questions they should be forced to answer:

- Why was mass manipulation removed from the critical-risk categories of the Preparedness Framework?
- What testing, if any, now evaluates persuasive and deceptive capabilities before a model ships?
- What evidence suggests the risk of AI-powered manipulation has decreased rather than increased since the framework was first published?
- To what extent did competitive pressure from rival labs drive this decision?

OpenAI has no good answers to these questions. And that should tell you everything you need to know.

--

If you're feeling alarmed reading this, good. You should be. But panic without action is useless. Here's what you can do:

Demand Transparency

Pressure your elected officials to demand transparency from AI companies about their safety practices. The decisions being made now will shape our collective future. We have a right to know what's happening.

Educate Yourself and Others

AI literacy is no longer optional. Learn how these systems work, understand their capabilities and limitations, and help those around you—especially vulnerable populations—understand the risks.

Support Regulation

Advocate for sensible AI regulation that requires rigorous safety testing before deployment, especially for systems with persuasion capabilities. The industry cannot be trusted to self-regulate.

Be Skeptical

In the age of AI-powered manipulation, skepticism is a survival skill. Question sources, verify information, and be aware that even seemingly personal interactions might be AI-generated.

--