Late Thursday, President Donald Trump issued an executive order instructing federal agencies to challenge state-level AI regulations. The administration says the move is needed to spare startups the burden of complying with a patchwork of differing state rules. But legal experts and young companies alike worry the order may instead prolong uncertainty, setting off years of litigation and forcing new businesses to navigate shifting state mandates while they wait for Congress to deliver a unified national policy.
Titled “Ensuring a National Policy Framework for Artificial Intelligence,” the order directs the Department of Justice to establish a task force within 30 days to challenge specific state statutes on the grounds that AI is a matter of interstate commerce requiring federal oversight. The Commerce Department, meanwhile, has 90 days to compile a list of “burdensome” state AI laws, a designation that could affect states’ access to federal funding, such as broadband grants.
Furthermore, the order directs the Federal Trade Commission and Federal Communications Commission to investigate potential federal standards that might supersede state regulations. It also charges the administration with collaborating with Congress to establish a consistent federal AI law.
The executive order arrives amid growing calls to rein in the patchwork of state-level AI regulations, especially after congressional attempts to freeze state regulation faltered. Legislators from across the political spectrum have warned that, absent a unified federal framework, blocking states from regulating could leave consumers vulnerable and businesses largely unsupervised.
Michael Kleinman, head of U.S. policy at the Future of Life Institute – an organization dedicated to mitigating extreme risks from advanced technologies – said: “This David Sacks-led executive order represents a boon for Silicon Valley magnates leveraging their sway in Washington to protect themselves and their enterprises from oversight.”
David Sacks, who serves as Trump’s top advisor on AI and crypto policy, has been a principal advocate for the administration’s initiative to preempt state AI laws.
Even those who favor a national regulatory framework acknowledge that the order itself does not create one. State laws remain in force until courts strike them down or states voluntarily stop enforcing them, meaning startups could face a prolonged stretch of regulatory flux.
Sean Fitzpatrick, CEO of LexisNexis North America, U.K., and Ireland, informed TechCrunch that states are expected to vigorously defend their consumer protection prerogatives in court, with these legal challenges potentially reaching the Supreme Court.
Proponents suggest the order could reduce ambiguity by concentrating the debate over AI regulation in Washington. Detractors counter that the ensuing legal battles will hurt startups immediately as they try to reconcile conflicting state and federal requirements.
Hart Brown, the lead author of Oklahoma Governor Kevin Stitt’s Task Force on AI and Emerging Technology recommendations, explained to TechCrunch: “Startups typically prioritize innovation and therefore lack… comprehensive regulatory governance programs until they achieve a size that necessitates such a framework. Developing these programs can be costly and time-intensive, especially within a highly fluid regulatory landscape.”
Arul Nigam, co-founder of Circuit Breaker Labs – a startup specializing in red-teaming for conversational and mental health AI chatbots – reiterated these concerns.
Nigam told TechCrunch that the fragmented patchwork of state AI laws puts smaller startups in his sector at a disadvantage: “There’s uncertainty regarding whether [AI companion and chatbot companies] must self-regulate. Are there open-source standards they should follow? Should they proceed with development?”
He also expressed optimism that Congress might now expedite the creation of a more robust federal framework.
Andrew Gamino-Cheong, CTO and co-founder of AI governance firm Trustible, shared with TechCrunch his view that the executive order will impede AI innovation and broader AI objectives. He stated, “Large technology companies and major AI startups possess the resources to employ legal counsel to navigate these challenges or simply mitigate their risks. However, this uncertainty disproportionately affects smaller startups, particularly those without ready access to substantial funding.”
He added that regulatory vagueness complicates sales to cautious clients such as legal departments, financial institutions, and healthcare providers, leading to longer sales cycles, heavier systems-integration work, and higher insurance premiums. “Even the belief that AI lacks regulation will diminish confidence in AI,” Gamino-Cheong said, and trust in the technology is already low enough to jeopardize its wider adoption.
Gary Kibel, a partner at Davis + Gilbert, indicated that while businesses would appreciate a singular national standard, “an executive order may not be the appropriate mechanism to annul laws legitimately passed by states.” He cautioned that the prevailing uncertainty could lead to either highly stringent regulations or a complete lack of action, both scenarios potentially resulting in a “Wild West” environment where major tech companies, with their capacity to absorb risk, would have an advantage.
Concurrently, Morgan Reed, president of The App Association, pressed Congress to swiftly establish a “comprehensive, focused, and risk-informed national AI framework. A fragmented system of state AI laws is untenable, and an extended legal battle challenging the constitutionality of an Executive Order offers no improvement.”