Why the Pentagon Can't Quit Anthropic
The military is suing Anthropic in federal court over national security. Its own intelligence agency is deploying Anthropic's most powerful model right now. This is dependency, after only a year.
By the time the Gulf War ended in 1991, the U.S. military was in love. The object of its affection was GPS — precision-guided bombs, real-time troop positioning, navigation for ships and aircraft that required almost no specialized training to use. The technology was so reliable that the military wove it into everything: weapons systems, supply chains, battlefield communications. The older skills — map and compass, celestial navigation, dead reckoning from speed and heading over time — were quietly allowed to atrophy. Why train soldiers in something they would never need again?
Then came Ukraine. Russia deployed jamming systems that blacked out GPS signals across front-line areas stretching three hundred kilometers wide. Western precision bombs, the ones that need GPS to hit their targets, started missing. Ukrainian troops in some sectors navigated by pre-war paper maps. A February 2026 analysis in War on the Rocks concluded that any army operating today should simply assume GPS will be disrupted. The Government Accountability Office, Congress’s watchdog agency, has been tracking this problem for fifteen years. Its most recent major report, from 2024, found the military still hasn’t delivered a working jam-resistant GPS upgrade after more than two decades of trying and more than eight billion dollars spent.
The tool was too good not to use. And by the time the need to do without it arrived, the backup capability was gone.
The same dynamic is now visible in the military’s relationship with the AI company Anthropic — and this one is playing out in real time, in federal court.