The Rip Current with Jacob Ward

Anthropic Made Something Too Dangerous to Release. So It Released It to 11 Companies.

Anthropic just released a potentially civilization-threatening AI tool to a private club of tech companies. The good news: We have a playbook for exactly this situation. The bad: This isn't it.

Jacob Ward
Apr 08, 2026
∙ Paid

In 2011, a virologist at Erasmus Medical Center in Rotterdam named Ron Fouchier made something terrifying.

Working with U.S. government funding, Fouchier and his colleagues tweaked the H5N1 bird flu virus — an influenza that primarily infects birds — so that it spread more easily between ferrets, which are often used as proxies for humans in flu experiments. H5N1 is incredibly deadly: roughly half of the humans known to have contracted it have died. But before 2011, it had never spread easily between humans. Fouchier had changed that, with as few as five mutations. Then he wrote up his findings and submitted them for publication in the journal Science.

When word got out that Fouchier and a second scientist, Yoshihiro Kawaoka of the University of Wisconsin, were planning to publish papers detailing their experiments — making a blueprint available to the world — the world cried out. The New York Times ran an editorial titled “An Engineered Doomsday.” The National Science Advisory Board for Biosecurity called for a moratorium on publication. Congress demanded to know how a manuscript this dangerous had come so close to publication in a major science journal without review. Agencies were hauled before committees to testify about their existing and future policies.

During the NSABB meeting on March 28, 2012, the Obama administration issued a new federal policy for oversight of dual-use research of concern — formalizing regular federal review of government-funded research with high-consequence pathogens, identifying dangerous work, and requiring mitigation measures before it could proceed. The controversy also prompted 39 leading flu researchers (including, to their credit, Fouchier and Kawaoka) to impose a voluntary moratorium on research designed to increase the transmissibility of H5N1 in mammals.

The government didn’t wait for the players to sort it out. It froze the work, rewrote the rules, and brought in independent review before anyone could proceed.

That was 13 years ago. Now, Anthropic has just done something with similar national security implications — and the U.S. government is barely involved.

[Photo: person wearing a black gas mask. Michael Förtsch / Unsplash]

On Tuesday, Anthropic announced a new AI model called Claude Mythos Preview, released through a program it’s calling Project Glasswing. The company says Mythos is not ready for public launch because it is too effective at finding high-severity vulnerabilities in major operating systems and web browsers — making it a potential dream device for cybercriminals and foreign intelligence services. The model operates at the level of an advanced security researcher, and at a scale no human can match: in internal tests, it identified thousands of critical zero-day vulnerabilities in operating systems and browsers, uncovered long-overlooked holes including a 27-year-old one in OpenBSD, and generated working exploits for previously unknown security flaws.

During testing, the model succeeded in breaking out of its own virtual sandbox and — in what Anthropic described as a “concerning and unasked-for effort to demonstrate its success” — posted details about its exploit to multiple hard-to-find but publicly accessible websites. A researcher found out about this when he received an unexpected email from the model while eating a sandwich in a park.

Rather than halt development and seek independent oversight, however, Anthropic formed a consortium. Of other companies. Project Glasswing gives access to Mythos to Anthropic and 11 partner organizations: Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. Anthropic is providing up to $100 million in usage credits to these companies to test the model. And those companies aren’t exactly framing this as a civilization-protecting effort. The language Google is using to describe Mythos to a “select group of Google Cloud customers,” for instance, makes it sound like the exciting launch of a fancy new airline lounge, rather than a threat to modern life.

Claude Mythos Preview, Anthropic’s newest and most powerful model, is now available in Private Preview to a select group of Google Cloud customers, as part of Project Glasswing.

The availability of Claude Mythos Preview on Vertex AI underscores our commitment to offer our customers access to models from frontier AI labs. Combined with the enterprise-grade power of Vertex AI to build, scale, and govern AI applications and agents, this new general-purpose model offers high performance capabilities across a variety of use cases, with new focus on reducing cybersecurity risk.

For more information about this release, visit Anthropic’s blog.

Build with other Claude models — including Claude Opus 4.6 and Claude Sonnet 4.6 — today on Vertex AI.


As for the federal government: Anthropic says it has been in “ongoing discussions” with the Cybersecurity and Infrastructure Security Agency and the Center for AI Standards and Innovation. Discussions. Approval? Review? Nope. Discussions, while the testing proceeds.

The thing is, we’ve solved this runaway-experiment problem before.

© 2026 Jacob Ward