Who Gets to Say No?
The Trump administration is punishing an AI company and a group of researchers for the same thing: working to impose limits on tech.
Two Lawsuits, One Morning
Two lawsuits landed Monday. Anthropic, the San Francisco AI lab that built Claude, sued the Department of Defense after being designated a “supply chain risk,” a classification normally reserved for Huawei and other foreign adversaries. And the Coalition for Independent Technology Research, represented by Columbia’s Knight First Amendment Institute and Protect Democracy, sued the State Department over a visa policy that has barred European academics from entering the United States because they studied whether Facebook removes hate speech.
One lawsuit targets a $61 billion AI company. The other targets a handful of professors. The weapon is different in each case — procurement law in one, immigration law in the other — but the underlying offense is the same. Both refused to accept the premise that powerful technology should operate without limits.
The Tide That Was Turning
For roughly a decade, the political and cultural current in America ran in one direction: against Big Tech. Cambridge Analytica blew open the myth that platforms were neutral town squares. Algorithmic addiction became a household concept. State attorneys general, school districts, and parents filed wave after wave of social media lawsuits, and the legal infrastructure behind those suits was gaining real traction. In January 2026, the litigation hit a new peak, with coordinated actions from dozens of states targeting Meta, Snap, and TikTok.
Overseas, the European Union became the world’s de facto technology regulator. The GDPR, its comprehensive data privacy law, forced American companies to meet European privacy standards worldwide. The Digital Services Act created enforceable content moderation obligations. And the researchers who informed those regulations, scholars of platform design, disinformation flows, and algorithmic amplification, became essential infrastructure in the accountability machine.
For a brief window, the accountability infrastructure had corporate allies, academic rigor, and legal momentum all moving in the same direction.
Then generative AI tested those limits all over again. The generative AI boom that began in late 2022 was built, in significant part, on other people’s work. OpenAI scraped millions of copyrighted articles to train ChatGPT; the New York Times sued, and a federal judge let the case proceed. Anthropic downloaded over seven million books from pirate websites to train Claude, then settled for $1.5 billion, the largest copyright recovery in U.S. history. Music publishers hit Anthropic with another $3.1 billion suit in January. By early 2026, more than 70 copyright infringement lawsuits had been filed against AI companies, and that does not even count the suits accusing AI companies of doing psychological damage to young people.
The Reversal
Almost immediately after taking office, the Trump administration reversed the tech-accountability current. The method is consistent and worth naming precisely: the administration does not suppress speech directly. It makes speaking costly.
The mechanism: find an institutional leverage point, whether a federal contract, a visa, an accreditation, or a funding stream, and pull it. The target doesn’t need to be silenced. They just need to understand that maintaining a public ethics position comes with a price.
That’s what makes the story of Anthropic’s standoff with the Pentagon so notable. It was the rarest thing in the AI industry: a major company arguing publicly for limits on its own product. Anthropic’s CEO, Dario Amodei, told the Pentagon and the public that certain applications of advanced AI, such as autonomous weapons and mass surveillance of Americans, represented unacceptable risks. A company with a $61 billion valuation was voluntarily narrowing its own market.
Anthropic refused to strip safety guardrails from Claude. Soon after, it lost access to the defense market and was branded a national security threat. OpenAI read the room within hours and cut a deal with the Pentagon. The signal sent to every other AI company, every university researcher, every would-be regulator was unmistakable: compliance is rewarded, conscience is punished.