Trump’s Executive Order Challenging State AI Laws Sparks Debate and Uncertainty

On Thursday evening, President Donald Trump issued an executive order instructing federal agencies to challenge state-level AI regulations. The administration argues that startups need protection from a fragmented regulatory landscape. But legal experts and startups say the order may instead prolong uncertainty by triggering drawn-out legal disputes, forcing young companies to contend with evolving state mandates while they wait for Congress to deliver a unified national policy.

Titled “Ensuring a National Policy Framework for Artificial Intelligence,” the executive order directs the Department of Justice to establish a task force within 30 days to challenge specific state laws, on the theory that AI falls under interstate commerce and therefore requires federal oversight. The Commerce Department is given 90 days to identify state AI laws deemed “onerous,” a designation that could affect states’ access to federal funding, such as broadband grants.

The directive also asks the Federal Trade Commission and the Federal Communications Commission to explore federal standards that could supersede state regulations, and charges the administration with working with Congress on a consistent AI legislative framework.

The order arrives amid ongoing efforts to rein in varying state-level AI regulations, particularly after attempts in Congress to block state regulatory action failed. Bipartisan legislators have argued that preventing states from legislating without a national benchmark in place would expose consumers to risk and allow companies to operate largely without oversight.

Michael Kleinman, the U.S. Policy head at the Future of Life Institute—an organization dedicated to mitigating severe risks from advanced technologies—stated that “This executive order, steered by David Sacks, benefits Silicon Valley titans who leverage their D.C. influence to protect themselves and their businesses from responsibility.”

David Sacks, who serves as Trump’s top advisor on AI and crypto policy, has been a prominent advocate for the administration’s initiative to preempt state AI laws.

Even those who favor a unified national framework acknowledge that this order does not establish one. Given that state laws remain enforceable until courts intervene or states halt their implementation, emerging businesses might experience a prolonged period of regulatory uncertainty.

Sean Fitzpatrick, CEO of LexisNexis North America, U.K., and Ireland, told TechCrunch that states are expected to defend their consumer-protection authority in court, and that these cases could ultimately reach the Supreme Court.

Though proponents suggest the order might lessen ambiguity by consolidating the debate on AI regulation in Washington, D.C., detractors contend that the impending legal conflicts will present immediate challenges for startups grappling with divergent state and federal requirements.

Hart Brown, the lead author of Oklahoma Governor Kevin Stitt’s Task Force on AI and Emerging Technology recommendations, explained to TechCrunch that “Startups, focused primarily on innovation, generally lack robust regulatory governance programs until they achieve a scale where such a program becomes essential. Developing these programs can be costly and time-intensive in a rapidly changing regulatory landscape.”

Arul Nigam, co-founder of Circuit Breaker Labs, a startup specializing in red-teaming for AI-powered conversational and mental health chatbots, expressed similar apprehensions.

Nigam told TechCrunch that it’s unclear whether [AI companion and chatbot companies] must self-regulate, and said the fragmented patchwork of state AI laws hurts smaller startups in his sector. “Are there open-source standards they should follow? Should they proceed with development?” he asked.

He also shared his optimism that Congress might now expedite the creation of a more robust federal framework.

Andrew Gamino-Cheong, CTO and co-founder of AI governance firm Trustible, told TechCrunch that the executive order is likely to impede AI innovation and broader AI development goals. “Larger tech companies and established AI startups possess the resources to employ legal counsel for guidance or can simply mitigate risks. This ambiguity disproportionately affects startups, particularly those unable to secure significant funding readily,” he said.

Gamino-Cheong added that legal uncertainty complicates sales to risk-sensitive clients such as legal departments, financial institutions, and healthcare providers, lengthening sales cycles and driving up systems-integration effort and insurance costs. He warned that “even the mere perception of unregulated AI will diminish public trust,” which is already low and jeopardizes widespread adoption.

Gary Kibel, a partner at Davis + Gilbert, commented that businesses would favor a singular national standard, but “an executive order isn’t necessarily the appropriate mechanism to supersede laws properly established by states.” He cautioned that the existing uncertainty could lead to either highly restrictive regulations or a complete lack of oversight, both scenarios potentially fostering a “Wild West” environment where major tech companies, with their capacity to absorb risk, would have an advantage.

Meanwhile, Morgan Reed, president of The App Association, urged Congress to quickly enact a “comprehensive, focused, and risk-aware national AI framework.” He emphasized, “We cannot tolerate a fragmented system of state AI laws, and an extended legal battle over the Executive Order’s legality is no improvement.”