The War Machine Is Now Off-the-Shelf
The United States is firing drones reverse-engineered from Iran's. An AI the president banned is still running targeting. Autonomous warfare isn't a dark possibility; it's already a supply chain.
On November 27, 2020, Mohsen Fakhrizadeh, architect of Iran’s nuclear weapons program, was driving his wife to their country home on a quiet road east of Tehran. His security detail followed in separate vehicles. He was in an unarmored Nissan sedan, on a route he had taken many times.
Hidden under a tarp in a pickup truck, a Belgian-made FN MAG machine gun was trained on the road. Mounted on a robotic apparatus, the whole assembly weighed roughly a ton. It had been smuggled into Iran in pieces, then reassembled. The gun was equipped with AI-assisted targeting, multiple cameras, and a satellite uplink. It could fire 600 rounds per minute. No one in Iran operated it that day. The person looking down the sights was more than a thousand miles away.
The AI compensated for the 1.6-second satellite delay between trigger pull and gunfire. It adjusted for the weapon’s recoil and the speed of Fakhrizadeh’s car. Facial recognition software was meant to ensure that only the scientist would be hit — his wife, sitting inches away, was not a target.
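The core of that compensation is simple kinematics: over the satellite delay, a moving car travels a predictable distance, so the aim point must lead the target by that distance. A minimal sketch of the arithmetic, using the reported 1.6-second delay and an assumed, illustrative car speed (the actual speed was not reported):

```python
def lead_offset(target_speed_mps: float, latency_s: float) -> float:
    """Distance the aim point must lead a moving target to
    compensate for control latency (simplified straight-line model)."""
    return target_speed_mps * latency_s

# 1.6 s satellite delay (from the report); 60 km/h is an assumption
# for illustration only, not a reported figure.
speed_mps = 60 / 3.6            # km/h -> m/s
offset = lead_offset(speed_mps, 1.6)
print(round(offset, 1))         # roughly 27 meters of lead
```

At highway speeds the required lead runs to tens of meters, which is why a human aiming through a 1.6-second delay would miss and a machine integrating the delay into its fire solution would not.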
A burst of fifteen rounds killed Fakhrizadeh in less than a minute.
The truck containing the gun then exploded, a self-destruct charge meant to destroy the evidence. It failed. Iranian investigators recovered enough of the robotic system to reconstruct exactly what had happened, and who built it. The wreckage left on that road was, clearly, the future of war.
Last Week at The Rip Current: The Standoff
There’s a moment in beats like mine when the abstract becomes operational. Last week I said February felt like one of those moments — the courts, the platforms, and the Pentagon all moving toward a reckoning at once. But I didn’t expect it to arrive like this.
Anthropic released the third version of its Responsible Scaling Policy — the company’s voluntary framework for managing catastrophic AI risk — and it landed in the middle of an already-simmering standoff over its $200 million Pentagon contract. The Defense Department had demanded that contractors allow their AI systems to be used for “any lawful use.” Anthropic’s CEO Dario Amodei refused to strip the safeguards preventing Claude from being deployed for mass domestic surveillance or fully autonomous weapons targeting. “We cannot in good conscience accede to their request,” he wrote.
The Pentagon’s response was direct. Defense Secretary Pete Hegseth gave Anthropic a deadline. They refused. On Friday, Trump posted on Truth Social calling Anthropic “leftwing nut jobs” and ordered every federal agency to immediately cease using the company’s technology. Hegseth designated Anthropic a supply chain risk to national security — a label previously reserved for foreign adversaries.
Then, OpenAI’s Sam Altman — a former colleague of Amodei’s, and now his fierce rival — announced a Pentagon deal Friday night to take Anthropic’s place in that contract. He claimed it included the same guardrails Anthropic had sought. He later wrote on X that he had misgivings about the government’s actions: “I think it is an extremely scary precedent, and I wish they handled it a different way.”
Anthropic held the line, got kicked out, and OpenAI walked in the same night. You can read about the standoff in last week’s coverage.
Here is where this story becomes something nobody fully anticipated.
Hours after Trump ordered federal agencies to halt use of Anthropic’s AI tools, U.S. Central Command used Claude — Anthropic’s flagship AI — for intelligence assessments, target identification, and simulating battle scenarios during Operation Epic Fury, the joint U.S.-Israeli strikes on Iran.
The same AI the president had just declared a national security threat was running live targeting support as American jets flew toward Iranian airfields, according to the Wall Street Journal.
Why? Because Claude is embedded in systems built by Palantir that run across CENTCOM’s classified networks. Experts told reporters that separating the military from Claude would amount to “open-heart surgery.” The transition is expected to take at least six months — during which time the tool the president banned will keep doing what it was doing.
This is the argument I made in The Loop: once a system is embedded deeply enough in the infrastructure of decisions, the humans nominally in charge lose real control. The president of the United States banned a piece of software. The software kept running a war.
They Told Us
Experts have been warning us this was coming for decades.
Peter Singer — the political scientist whose 2009 book Wired for War remains the definitive account of how robotics reshapes conflict — laid out the underlying dynamic more than a decade ago in a podcast episode with me, and in interviews like this one with the International Red Cross. Revolutionary technologies, he argued, force new questions upon us that the previous generation never imagined asking. His particular concern was political: when technology allows democracies to carry out acts we would previously have called war, without the political cost of putting a son or daughter in harm’s way, the barriers to conflict don’t just lower. They disappear. And the humanitarian community, Singer warned, was already behind — reacting after the fact to technologies that were already in use, guaranteeing that its influence would arrive too late.
Singer was right. And he wasn’t alone.
Stuart Russell — the Berkeley AI scientist who co-wrote the standard textbook on artificial intelligence — made a different but equally clarifying argument in a 2016 talk that led directly to the short film Slaughterbots, screened at the United Nations the following year. His warning was about physics, not politics. Autonomous weapons are intrinsically scalable, Russell argued, because they don’t require a human to find the target or pilot the weapon toward it. That means you can launch very large swarms of lethal machines — effectively creating a new category of weapon of mass destruction, assembled from cheap individual units.
When Russell screened Slaughterbots at the UN, the Russian ambassador dismissed it as science fiction. Three weeks later, Turkey announced the Kargu drone: fully autonomous targeting of human beings using facial recognition.
Singer was warning about what happens to democracy. Russell was warning about what happens to physics. Neither was wrong. Both were largely ignored.
The Battlefield
Ukraine proved the thesis at industrial scale.
Since Russia’s full-scale invasion in February 2022, cheap drones and a rapidly expanding roster of unmanned systems have collectively redefined modern warfare. Ukraine now produces up to four million drones annually. Monthly FPV drone output jumped from roughly 20,000 units in 2024 to 200,000 per month in 2025. The country established the world’s first dedicated military branch for unmanned systems.
The most dramatic proof of concept: Operation Spiderweb, June 2025. Ukraine’s Security Service secretly transported 117 FPV drones inside cargo trucks deep into Russian territory. They opened the roofs remotely and launched simultaneous swarms at four airbases, damaging 41 aircraft including strategic bombers. The operation was planned over 18 months and executed entirely from inside enemy territory. No manned aircraft. No soldiers in the field. Just cheap machines launched from hiding.
Ukraine proved that cheap, autonomous, mass-produced systems could hold off a superpower.
Iran took notes. So did we.
Iran’s state-affiliated FARS News Agency released footage this week of an underground drone arsenal — rows of drones stored in tunnels, mounted on rocket launchers, Iranian flags on the walls. Propaganda, yes. Also a genuine inventory statement: we have them, many of them, underground.
And now the United States has its own version — reverse-engineered from Iran itself.
CENTCOM confirmed that Operation Epic Fury included the first combat use of the LUCAS drone — the Low-cost Unmanned Combat Attack System. LUCAS is a one-way attack drone reverse-engineered from the Iranian Shahed-136. Built by Arizona-based SpektreWorks, it has a range of roughly 500 miles and carries a payload that defense analysts estimate delivers roughly twice the explosive yield of a Hellfire missile. Cost per unit: approximately $35,000.
This is what a commodity arms race looks like. One side invents a cheap weapon. The other side copies it. Within a few years both sides are launching the same $35,000 machine at each other. No treaty covers it. No export control stopped it. No international body signed off on this new category of warfare. It just spread — exactly as Singer and Russell said it would.
The Fiction of the Red Line
Which brings us back to Anthropic — and why its stand was more complicated than it appeared.
Amodei framed the company’s position as a line against future autonomous weapons, systems that supposedly don’t yet exist in deployed form. But they already exist. They are defending American bases right now.
The C-RAM — Counter Rocket, Artillery, and Mortar system — is a radar-controlled, computer-directed rapid-fire gun that autonomously detects, tracks, and destroys incoming threats. Here’s what it looks like in operation. The decision window is seconds, the engagement sequence is almost entirely automated, and the system was built specifically to counter incoming rockets, missiles, and drones.
When Iranian missiles and drones began striking U.S. bases across Bahrain, Kuwait, Qatar, Jordan, and the UAE in retaliation for Epic Fury, the systems shooting back were doing so on their own. No human can pull a trigger fast enough to intercept an incoming ballistic missile. So the machine decides. Raytheon sells them. Countries buy them. At this point, they’re about as exotic as a radar dish.
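The arithmetic behind "no human can pull a trigger fast enough" is stark. The time between detection and impact is just range divided by closing speed, and for fast inbound threats that window is a handful of seconds. A sketch with assumed, illustrative numbers (detection range and speed vary widely by threat and are not from the source):

```python
def engagement_window_s(detection_range_m: float, closing_speed_mps: float) -> float:
    """Seconds between radar detection and impact for an inbound
    threat on a straight course (simplified, constant speed)."""
    return detection_range_m / closing_speed_mps

# Assumed figures for illustration: a threat detected at 4 km,
# closing at 700 m/s. The machine must classify the track, slew
# the gun, and fire inside this window.
print(round(engagement_window_s(4000, 700), 1))  # under 6 seconds
```

Fold in human reaction, verification, and a chain of command, and the window closes before anyone picks up a phone. That is the practical case for letting the machine decide.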
The debate about autonomous weapons has always assumed they were something to be prevented. What this week reminded us is that they are something to be purchased. The only question now is who has the better supply chain — and who is writing the software that runs it.
So what was the Anthropic standoff about? The Pentagon’s demand was not hypothetical. Amodei’s refusal was not paranoid. The fight was over who writes the rules for systems that are already running — and that fight is now publicly unresolved, with OpenAI holding the contract and the terms classified.
The weapons are already in use: a robot gun on a country road in Iran; an AI the president banned running targeting for a war; autonomous defense systems firing without human input across the Gulf; a $35,000 drone, based on a weapon our adversary invented, being used against that adversary at scale.
Who decides? The answer this week was: machines already embedded in the system. Who profits? The defense contractors who can manufacture at volume. Who pays? That question is being answered in real time, across 24 of Iran’s 31 provinces.
This week’s paid analysis will go deep on what AI is actually doing in those targeting pipelines, what Palantir’s FedStart program built, and what “the same guardrails Anthropic sought” actually means inside a classified DoD contract nobody outside the Pentagon can read. Become a paid subscriber and get it 48 hours before it goes anywhere else.
Jacob Ward has covered technology and human behavior for CNN, NBC News, PBS, and Al Jazeera. He is the author of The Loop: How AI Is Creating a World Without Choices and How to Fight Back.