The Rip Current with Jacob Ward

"Safety Theater"

The Anthropic CEO's leaked memo reveals exactly the restrictions the Pentagon wanted deleted. Here's what that contested language tells us about who gets to build the military's AI now — and why.

Jacob Ward
Mar 05, 2026

Last Friday at 5:01 PM, the Department of War designated Anthropic — the company that makes Claude — a supply-chain risk to national security. Minutes before that, Donald Trump posted on Truth Social that Anthropic was “A RADICAL LEFT, WOKE COMPANY” and ordered every federal agency to stop using its technology. Hours later, OpenAI announced it had signed the contract Anthropic had just refused.

That summarizes what I knew as of a few hours ago. But here’s what leaked just now.

Dario Amodei, Anthropic’s CEO, wrote a 1,600-word memo to his employees explaining what he says actually happened in the negotiation room. The Information obtained the full document. TechCrunch, Axios, and Sherwood News confirmed key excerpts. What Amodei described tells us something vital about what the White House wanted, what Anthropic refused to provide, and what the Department of War now has from OpenAI.

The phrase

Anthropic had been negotiating a Pentagon contract worth roughly $200 million. The negotiation had gone on for weeks. By the end, Amodei says, the Defense Department had agreed to accept nearly all of Anthropic’s terms — safety restrictions, use-case limitations, the works. Except for five words.

The Pentagon asked Anthropic to delete a contractual prohibition on “analysis of bulk acquired data.” That phrase, Amodei wrote, was “the single line in the contract that exactly matched this scenario we were most worried about” — an AI system trained on aggregated American communications data for domestic surveillance at scale.

Anthropic said no. Defense Secretary Pete Hegseth gave the company a 5:01 PM Friday deadline to drop its objection. Anthropic held firm. The Pentagon declared it a security threat.

The replacement

On Thursday, the day before Anthropic was blacklisted, OpenAI CEO Sam Altman wrote in a memo to his staff that his company shared Anthropic's position: "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines." He told CNBC he was working on a deal that would adhere to the same safety standards, and told employees that "this is a case where it's important to me that we do the right thing, not the easy thing that looks strong but is disingenuous."

Then, on the same day Anthropic’s negotiations fell apart, OpenAI stepped in. The contract it accepted uses “all lawful purposes” as the governing standard, meaning the Pentagon can do anything with OpenAI’s technology that existing U.S. law permits. But experts questioned Altman’s claim that OpenAI had found a way to match Anthropic’s ethical standards, and whether the contract even allowed it to. Jessica Tillipman, associate dean for government procurement law studies at George Washington University, reviewed OpenAI’s published contract language and concluded that it does not give the company a free-standing right to prohibit otherwise-lawful government use.

OpenAI’s Sam Altman and Anthropic’s Dario Amodei notably “refused to hold hands” during an AI summit photo op, as Fortune’s reporter put it, and today’s memo from Amodei only deepens the pool of evidence that the rival AI CEOs dislike one another. (Credit: Fortune)

The "all lawful purposes" standard also drew a pointed response from Congress. Representative Sam Liccardo, whose district includes San José and Silicon Valley, introduced an amendment to the Defense Production Act prohibiting the Pentagon from retaliating against tech companies that institute safety guardrails. In a speech before the committee vote, Liccardo laid out the core problem with the Pentagon's assurance that it would simply "follow the law."

"There is only one problem with the Pentagon's approach," he said. "There is no law. The law is years behind the technology."

And as MIT Technology Review noted, an assumption that federal agencies won’t break the law is thin comfort to anyone who remembers that the surveillance practices Edward Snowden exposed had been deemed legal (in secret) by the agencies running them.

In his internal memo, Amodei was not diplomatic about his rival. He called OpenAI’s safeguards “safety theater.” He said they “mostly do not work.” He described OpenAI’s public messaging as “mendacious” and “straight up lies,” and said Sam Altman was falsely “presenting himself as a peacemaker and dealmaker.”

The inputs

The most revealing section of the memo is Amodei’s explanation for why the Trump administration came after Anthropic specifically.

“The real reasons DoW and the Trump admin do not like us,” he wrote, “is that we haven’t donated to Trump (while OpenAI/Greg have donated a lot).”

Greg Brockman, OpenAI’s president, and his wife gave $25 million to the MAGA Inc super PAC. Altman personally donated a million dollars to Trump’s inauguration. At the Stargate announcement in January, Altman stood beside the president and said: “For AGI to get built here, to create hundreds of thousands of jobs, to create a new industry centered here, we wouldn’t be able to do this without you, Mr. President.”

Amodei called that “dictator-style praise.”

He also told his employees — in a line that had zero chance of staying internal — that OpenAI’s spin was “working on some Twitter morons, which doesn’t matter.”
