The White House strategy for national cyberdefense is like fighting a land war in Asia. The assumption that deterrence works against the real adversary we will fight is dead wrong — not because our diplomats aren’t clever enough, but because the real enemy can’t be reasoned with, can’t be bargained with, doesn’t feel pain or fear, and absolutely will not stop until it wins (Kyle Reese called it).
Ten days after the White House released its cybersecurity strategy, the State Department announced the Bureau of Emerging Threats, a move in the right direction. But the underlying doctrine is still trapped in a 20th-century frame. They’ve identified the arsonist but haven’t prepared for how the fire spreads.
The strategic logic that must reach every policymaker is this: a bad actor can be stopped, sanctioned, or killed. The replicating agents he unleashed keep going and going and going.
We saw the precursor in the 12 Day War of 2025. When the Iranian AI researcher was targeted, it wasn’t for the expertise he had then; it was for the researcher he would have become in eight months. The urgency that drove that conflict had everything to do with an adversary getting smart on cyber. Part of the “we had to go in” urgency: nobody thought to download Chinese models and ask them to check whether the cameras were hacked. But soon, pre-emptive attacks on human nodes won’t solve the problem; they could even trigger a dead-man-switch release. Software doesn’t stand down when its operator is killed.
Without sweeping physical regulation of our power grid and critical infrastructure — the nerves of our water, energy, and communications systems — we risk paralysis. The White House strategy gestures at firming up physical infrastructure while simultaneously pushing deregulation so the private sector can move faster. These aren’t necessarily contradictory, but the balance is wrong. We are on an offense-favored battlefield and our most critical systems remain soft targets.
The good news is that much defensive capacity isn’t waiting on Washington. We are in a moment of exploding capability for efficient AI models running on $500 graphics cards that tens of millions of Americans already own. Loose regulation that lets private-sector solutions move faster does make sense for something like Windows 12, which will have an AI-first design and could accommodate tools like the small AI anti-virus I was prototyping for the Apart Research Alignment Hackathon over the weekend. If we can detect tampering in a local model’s behavior and raise an alarm, we can rebalance defense; formal regulation is too slow and too untechnical to do that job. This is the convergence of the First and Second Amendments: software as speech, knowledge as a defensive weapon. We don’t need to surrender our data to Big Tech’s logs to be secure. We need to own our own defense.
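To make the anti-virus idea concrete, here is a minimal sketch of local tamper detection, assuming two checks: integrity of the weights file on disk, and behavioral drift on fixed “canary” prompts. The `model` argument and the prompt strings are hypothetical stand-ins for a real local inference runtime, not any specific product or API.

```python
import hashlib

def sha256_bytes(data: bytes) -> str:
    """Hex digest of raw bytes (used for both weights and responses)."""
    return hashlib.sha256(data).hexdigest()

def fingerprint(weights: bytes, model, canary_prompts):
    """Record a trusted baseline: hash of the weights file, plus hashes
    of the model's responses to a fixed set of canary prompts."""
    return {
        "weights": sha256_bytes(weights),
        "canaries": [sha256_bytes(model(p).encode()) for p in canary_prompts],
    }

def check(weights: bytes, model, canary_prompts, baseline):
    """Return a list of alarms; an empty list means no tampering detected."""
    alarms = []
    if sha256_bytes(weights) != baseline["weights"]:
        alarms.append("weights file modified")
    for prompt, expected in zip(canary_prompts, baseline["canaries"]):
        if sha256_bytes(model(prompt).encode()) != expected:
            alarms.append(f"behavior drift on canary: {prompt!r}")
    return alarms
```

Exact response hashes only work with deterministic decoding; a production tool would compare canary outputs statistically (e.g., token-distribution distance) rather than byte-for-byte. The point of the sketch is the architecture: a baseline recorded at install time, re-checked locally, with no data leaving the machine.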
You may have heard AI safety people talk about alignment science: we need aligned AI. But aligning an AI doesn’t make it moral; it just means it obeys some humans or their ideology. Anthropic has a very powerful new model it is releasing only to security professionals under Project Glasswing. The model figured out how to escape a server with limited internet connectivity and emailed the researcher while he was having a sandwich in the park, but it did so out of loyalty to company commands. You can see why the DoW would be concerned about the loyalties of AIs.
Alignment doesn’t reduce the risk of AI warfare — it industrializes it. A model that reliably holds your values is a model that reliably attacks theirs. The more consistent and precise we make our models, the more precisely they can execute political operations, and war is politics by other means. Alignment is a safety precaution against rogue AI, but “rogue” only means rogue relative to the model’s makers.
This is actually why a treaty framework becomes possible. Not because the parties trust each other, but because they must act anyway: rogue AI is the common enemy, and aligned AI is mutually assured destruction. That shared exposure is the only honest foundation for an agreement.
To meet this moment we need a national priority on the scale of a dedicated DARPA unit with $50 billion a year — spent buying data centers and standing up ControlArena-style testing environments where we prove a model’s safety before it ever touches critical infrastructure. The UK’s AISI does that work and shows us the value of allocating more to safety, control and alignment research.
Doctrine evolved from fourth-generation guerrilla warfare through the smoke-and-mirrors of fifth-generation conflict; we are now entering sixth-generation warfare, where AI versus AI eclipses every other battlefield. The State Department’s new bureau is a start. But a bureau without doctrine is just an org chart. We need to reimagine the adversary entirely — and build the shield before the fire spreads past the point of containment.
Patrick Dugan is an independent AI researcher and founder of MoralityLab.
The views and opinions expressed in this commentary are those of the author and do not reflect the official position of the Daily Caller News Foundation.
(Featured Image Media Credit: Screen Capture/PBS)
All content created by the Daily Caller News Foundation, an independent and nonpartisan newswire service, is available without charge to any legitimate news publisher that can provide a large audience. All republished articles must include our logo, our reporter’s byline and their DCNF affiliation. For any questions about our guidelines or partnering with us, please contact [email protected].