Washington is closer than ever to deciding how artificial intelligence will be regulated, and the central fight is not over the technology itself but over who has the authority to oversee it.
In the absence of a substantive federal AI framework focused on consumer protection, many states have passed legislation to shield their residents from AI-related harms, such as California’s SB-53 AI safety bill and Texas’s Responsible AI Governance Act, which prohibits the intentional misuse of AI systems.
Big Tech companies and Silicon Valley startups argue that this patchwork of state rules creates a messy, unworkable system that stifles innovation.
“This approach will impede our progress in the competitive race against China,” Josh Vlasto, co-founder of the pro-AI political action committee Leading the Future, told TechCrunch.
The tech industry, along with some of its alumni now serving in the White House, wants either a single national standard or no regulation at all. Amid this high-stakes fight, new efforts have emerged to block states from enforcing their own AI laws.
House lawmakers are reportedly attempting to use the National Defense Authorization Act (NDAA) to restrict state AI legislation. At the same time, a leaked draft of a White House executive order signals strong support for federal preemption of state-level AI rules.
But broad federal preemption, which would strip states of their authority to regulate AI, faces stiff opposition in Congress; a similar moratorium was overwhelmingly rejected earlier this year. Lawmakers argue that without a federal framework in place, blocking states from acting would leave consumers exposed to risk and let tech companies operate without accountability.
In pursuit of such a national standard, Representative Ted Lieu (D-CA) and the bipartisan House AI Task Force are drafting a comprehensive package of federal AI bills covering a range of consumer protections, including fraud, healthcare, transparency, child safety, and catastrophic risk. A sweeping bill like that would likely take months, if not years, to pass, which is why the current push to curb state authority has become one of the most contested fronts in AI policy.
Defining the Conflict: The NDAA and Executive Order
Efforts to block states from regulating AI have intensified in recent weeks.
The House has explored attaching provisions to the NDAA that would restrict states from regulating AI, Majority Leader Steve Scalise (R-LA) told Punchbowl News. Politico reported that Congress was aiming to finalize the defense bill before Thanksgiving. A source familiar with the talks told TechCrunch that negotiations have focused on narrowing the scope, potentially preserving state jurisdiction over areas such as child safety and transparency.
Meanwhile, a leaked draft of a White House executive order sketches out the administration’s potential approach to preemption. The EO, which is reportedly on hold, would create an “AI Litigation Task Force” to challenge state AI laws in court, direct agencies to evaluate state regulations deemed “burdensome,” and push the Federal Communications Commission and Federal Trade Commission to adopt federal standards that override state rules.
Notably, the executive order would give David Sacks, Trump’s AI and crypto czar and a co-founder of the venture capital firm Craft Ventures, shared responsibility for developing a uniform legal framework. That provision would hand Sacks direct influence over AI policy, beyond the typical remit of the White House Office of Science and Technology Policy and its director, Michael Kratsios.
Sacks has publicly supported prohibiting state regulations and minimizing federal oversight, preferring instead industry self-regulation to “foster maximum growth.”
The Case Against a Fragmented Regulatory Landscape
Sacks’ position reflects the prevailing view across much of the AI industry. In recent months, several pro-AI super PACs have formed, pouring hundreds of millions of dollars into state and local elections to campaign against candidates who support AI regulation.
Leading the Future, backed by Andreessen Horowitz, OpenAI president Greg Brockman, Perplexity, and Palantir co-founder Joe Lonsdale, has raised over $100 million. This week, the PAC launched a $10 million campaign urging Congress to set a federal AI policy that would preempt state-level regulation.
“To foster innovation within the tech sector, it’s unfeasible to have a scenario where numerous laws emerge from individuals who may lack the requisite technical understanding,” Vlasto told TechCrunch.
A patchwork of state regulations, he argued, would “hinder our competitive standing against China.”
Nathan Leamer, executive director of Build American AI, the PAC’s advocacy arm, confirmed that the group favors federal preemption even without explicit AI-focused consumer protections in place. Leamer argued that existing laws, such as those covering fraud or product liability, are sufficient to address AI-related harms. Where state laws often try to prevent problems before they occur, Leamer favors a more reactive approach: let companies move fast and resolve any harms through the courts afterward.
No Federal Preemption Without State Input
Alex Bores, a New York Assembly member and congressional candidate, is one of Leading the Future’s first targets. He authored the RAISE Act, which requires major AI labs to implement safety protocols designed to prevent severe harms.
Bores told TechCrunch: “I recognize the immense potential of AI, which is precisely why sensible regulations are crucial. Ultimately, the AI systems that will succeed in the market are those that are trustworthy, and often the market tends to undervalue or offer insufficient short-term incentives for investing in safety.”
While Bores supports a national AI policy, he argues that states are nimbler and can respond faster to emerging threats.
And indeed, states have moved faster.
As of November 2025, 38 states had enacted more than 100 AI-specific laws this year, largely targeting deepfakes, transparency and disclosure requirements, and government use of AI. (A recent analysis found that 69% of these laws impose no obligations on AI developers at all.)
Congress’s own track record reinforces the point that the federal government moves slower than the states. Hundreds of AI bills have been introduced, but only a handful have passed. Since 2015, Representative Lieu has introduced 67 bills through the House Science Committee; just one has become law.
More than 200 lawmakers signed an open letter opposing preemption in the NDAA, arguing that “states function as democratic laboratories” and must “preserve the adaptability to address emerging digital issues.” Nearly 40 state attorneys general also signed an open letter opposing a ban on state AI regulation.
Cybersecurity specialist Bruce Schneier and data scientist Nathan E. Sanders, co-authors of Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, contend that concerns regarding fragmented regulations are exaggerated.
They note that AI companies already comply with stricter EU regulations, and that most industries manage just fine under differing state laws. The real motivation, they argue, is avoiding accountability.
Potential Structure of a Federal Standard
Representative Lieu is preparing a comprehensive bill of more than 200 pages, which he expects to introduce in December. It covers a wide range of issues, including penalties for fraud, deepfake protections, whistleblower protections, compute resources for academic institutions, and mandatory testing and transparency requirements for developers of large language models.
That last provision would require AI labs to test their models and publish the results, something most labs currently do only voluntarily. Lieu, who has not yet formally introduced the bill, noted that it stops short of requiring federal agencies to directly review AI models. That contrasts with a comparable bill from Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT), which would require a government-run evaluation program for advanced AI systems before they are deployed.
Lieu acknowledged that his bill would be less stringent, but said it has a better chance of becoming law.
“My objective is to enact legislation this term,” Lieu said, noting that House Majority Leader Scalise is openly hostile to AI regulation. He added, “I’m not crafting a bill based on what I’d desire as a sovereign; rather, I’m endeavoring to create a bill capable of passing through a Republican-majority House, a Republican-majority Senate, and a Republican-controlled White House.”