The Rip Current with Jacob Ward

Anthropic Built a Door Into the Pentagon. This Week, It Got Slammed In the Company's Face.

How five days turned a $200 million defense contract into the biggest test of AI industry principles since the technology went mainstream — and why the company's own strategy may have set this off.

Jacob Ward
Feb 27, 2026

The week that just unfolded between Anthropic and the Pentagon is, in miniature, the story of every tension in the AI industry right now: a company that seems to genuinely believe in safety guardrails discovering that imposing them ahead of deployment — as critics like myself have argued they should — may in fact get them kicked out of the market.

On Friday afternoon, President Trump posted on Truth Social that he was directing every federal agency to stop using Anthropic's technology immediately. He gave the Pentagon six months to phase Claude out of classified systems where it was, until today, the most deeply embedded AI model in American defense and intelligence. He threatened the company with unspecified criminal and civil consequences if it didn’t cooperate during the transition.

The ban capped a five-day escalation that began with a policy document and ended with the President of the United States calling an American AI company “leftwing nut jobs” for refusing to let its technology be used for mass domestic surveillance or fully autonomous weapons.

Here’s how the week played out, and why it matters far beyond this one company.


Monday, February 24: The Policy Update

Anthropic released the third version of its Responsible Scaling Policy, the voluntary framework the company uses to manage catastrophic risks from AI systems. The update had reportedly been in the works for some time, but its release landed in the middle of an already-simmering standoff with the Pentagon over the terms of the company's $200 million defense contract.

The key change in RSP v3 was structural. Anthropic separated its own commitments — what it could realistically do as a single company — from its recommendations for the broader industry. The document acknowledged, in unusually candid terms, that some safety measures at higher AI capability levels would be difficult or impossible to implement without collective action from governments and other AI developers.

That honesty now reads as foreshadowing. The cooperation Anthropic described as necessary was, by the end of the week, clearly not something it was going to get from the U.S. government.

“Anthropic made its bed with Palantir,” says Juan Sebastian Pinto, a former Palantir employee who worked inside the FedStart program that helped bring Anthropic, OpenAI, and Google into sensitive government contracts in 2025. “To me it has participated like a lot of other companies in kind of occluding this main issue in front of it.”

[🔒 THE REST OF THIS POST IS FOR PAID SUBSCRIBERS]

If you want the full picture of how Anthropic ended up here — including reporting from inside Palantir's FedStart program and the terrifying uses of AI that the company's own red lines don't address — subscribe now.
