The Rip Current with Jacob Ward

What's In the White House's AI Plan? Here's My Guess.

The White House has been a policy laundromat for AI industry interests, kneecapping the states and calling it competition with China. But will Congress go along with Trump's plan, or go its own way?

Jacob Ward
Mar 20, 2026

The White House is expected to send Congress a legislative framework for federal AI regulation today, according to Axios. The White House hasn’t released the text, and it did not immediately respond to reporters’ requests for comment.

But we have a lot to go on. We have the executive order Trump signed in December, directing the Justice Department to sue states over their AI laws and conditioning federal broadband funding on compliance. We have a year's worth of failed attempts to kneecap state AI regulations — a 10-year moratorium stripped from the One Big Beautiful Bill in a 99-1 Senate vote, then preemption language blocked again from the National Defense Authorization Act just days before Trump reached for the executive order instead. We have a White House AI czar, David Sacks, who according to a New York Times investigation remains invested in 449 companies with AI products — and who secured not one but two ethics waivers allowing him to shape federal AI policy while holding those stakes, which a government ethics expert at Washington University described as "sham ethics waivers" that were "aimed at enabling Sacks to profit from his government position."

Read together, that record tells you a lot about what’s likely coming — and, more importantly, why state law is almost certain to remain the strictest AI regulation most Americans will see under this White House.

UPDATE: As this piece was being finalized, Sen. Marsha Blackburn released a discussion draft of the “TRUMP AMERICA AI Act,” a nearly 300-page bill that would place a duty of care on AI developers, sunset Section 230, and incorporate bipartisan legislation on child safety, creator copyright protections, and AI-related job reporting. It is, by a wide margin, the most substantive federal AI proposal to date. I’ll have a full analysis in a separate piece — but the draft’s existence already reshapes the landscape, and complicates the White House’s plans.

Pattern Recognition

To understand what the White House might recommend to Congress, start with what it’s already attempted, and who that would serve.

The first try was the One Big Beautiful Bill Act. The version of the bill passed by the House on May 22, 2025, would have placed a 10-year moratorium on any state enforcing any law or regulation affecting “artificial intelligence models,” “artificial intelligence systems,” or “automated decision systems.” The provision was championed by Sen. Ted Cruz and backed by an administration that had spent months arguing that state AI laws were strangling innovation.

Senators voted 99-1 (an incredible margin in this day and age) in an overnight session to remove it, adopting an amendment led by Sen. Marsha Blackburn of Tennessee, who had earlier broken with her party over the issue. Blackburn told Wired the provision “could allow Big Tech to continue to exploit kids, creators, and conservatives,” and warned that “until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can’t block states from making laws that protect their citizens.”

That was July. By November, the White House was trying again — this time pushing to attach AI preemption language to the National Defense Authorization Act. That effort drew bipartisan opposition from Florida Gov. Ron DeSantis, Arkansas Gov. Sarah Huckabee Sanders, and Sen. Josh Hawley, and was excluded from the final NDAA text unveiled December 7. House Majority Leader Steve Scalise conceded the language wouldn’t make it and said they were looking at “other places” for preemption provisions.

The Impact of President Trump’s Executive Actions
President Trump signing an executive order February 10, 2025. (Credit: Getty Images)

Four days after the NDAA text dropped, Trump signed the executive order. It directs the Attorney General to establish an AI Litigation Task Force to sue states over their AI laws, and threatens to block states with certain regulations from receiving Broadband Equity Access and Deployment funding — a massive federal grant program established by the 2021 Bipartisan Infrastructure Law to expand high-speed internet access in underserved communities.

The pattern is clear: preempt the states, by any vehicle available, at any cost.

Who’s Setting the Table

The framework coming to Congress this week will be shaped primarily by David Sacks, Trump’s AI czar, a venture capitalist whose financial network is deeply embedded in the AI industry he’s now tasked with regulating. The framework is expected to touch on what Sacks calls “the four C’s” — child safety, communities, creators, and censorship — according to Axios. That language tells you almost nothing about liability, labor, or accountability.

The day after Trump returned to office, Sam Altman stood behind the presidential seal at the White House, praising the president for the $500 billion "Stargate" AI infrastructure initiative — telling him, "For AGI to get built here, to create hundreds of thousands of jobs, to create a new industry centered here, we wouldn't be able to do this without you, Mr. President." Altman had donated $1 million to Trump's inaugural committee weeks earlier (as had several other tech leaders betting big on AI) and attended the swearing-in ceremony at the Capitol. The relationship between this White House and the AI industry it's writing rules for is not a secret.

Related: “Why A.I. Tycoons Cannot Put America First” (Jacob Ward, January 24, 2025)

What that relationship looks like when a corporation tries to impose ethical guidelines of its own is a more recent data point. Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a deadline to comply with demands to lift restrictions on its AI model Claude — restrictions against use in mass domestic surveillance and autonomous weapons — or lose a $200 million Pentagon contract and face a government blacklist, as CNN reported. Anthropic said the Pentagon’s proposed compromise language was “paired with legalese that would allow those safeguards to be disregarded at will.” Amodei held the line. Trump then ordered every federal agency to stop using Anthropic’s technology, posting on Truth Social that “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War.” OpenAI stepped in and reached a deal after Anthropic had been designated a supply chain risk, and Google is deepening its ties as well.

Related: “Safety Theater” (Jacob Ward, March 5)

This is a White House that has so far rewarded compliance and punished independence. As a result, here’s what we’re likely to see next.
