
Federal Bill Threatens State AI Safety Laws

May 29, 2025 | Valerie Hudson | 5-minute read

Tags: AI Regulation, State Law, Tech Policy


Valerie M. Hudson is a university distinguished professor at the Bush School of Government and Public Service at Texas A&M University. Her views are her own.

A provision quietly added to President Donald Trump’s “big, beautiful bill,” recently passed by the House of Representatives, is a Trojan horse. It is unclear whether Trump himself is aware of this last-minute Republican insertion. One hopes he will be informed and will publicly oppose it, as it starkly contradicts his efforts to make the internet a safer place for everyone, particularly children.

The Hidden Clause and Its Sweeping Impact

Buried within Section 43201(c) of the extensive 1,118-page budget bill lies a single sentence with far-reaching consequences: “no state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.”

This seemingly innocuous sentence is poised to cause incredible mischief.

Undermining State-Level AI Protections: The Case of Utah

Consider Utah, a state that has been a leader in creating policies to protect children from the negative aspects of our internet-driven culture. Utah was among the first to require online porn sites to implement age verification, in 2023. More recently, Utah mandated that app stores must also verify users’ ages.

Furthermore, Utah has enacted protections for its citizens interacting with so-called “mental health chatbots.” While Utah's proactive stance is commendable, the clause in the federal budget bill would jeopardize the enforcement of all these state-level legal protections. If this sentence remains in the final bill, the federal government would prohibit Utah from enforcing any of its AI-related laws.

As co-editor of and contributor to “The Oxford Handbook of AI Governance,” I strongly condemn this maneuver. It is not merely irresponsible; it actively prevents state governments from responsibly safeguarding their citizens against the increasingly apparent downsides of unregulated AI deployment. In this light, the insertion of this sentence is patently malicious.

Big Tech’s Push for Unfettered AI Development

It's evident that technology companies desire unrestricted freedom in pursuing their AI objectives, irrespective of potential harm to Americans. As J.B. Branch and Ilana Beller noted in a commentary, “The provision is a message to the states: Sit down and shut up while Big Tech writes its own rules or continues to lobby Congress to have none at all.”

Congress’s failure to establish any rules of its own, apart from the Take It Down Act championed by Trump and first lady Melania Trump, is equally indefensible. States have been compelled to act by Congress’s persistent inaction, rightly stepping into the regulatory void.

States Stepping Up: A Patchwork of Necessary Regulations

Beyond Utah, numerous states have sought to hold AI-promoting companies accountable for their products. Various states have laws against AI-generated deception that could influence elections. Others require citizens to be notified when AI makes decisions about matters like mortgage approval and to be provided recourse to contest such decisions. Some states prohibit AI voice and image cloning of citizens; others ban medical insurance denials by AI. Many states have legislation setting standards for autonomous vehicles. Which of these vital protections would anyone want to place under a federal moratorium?

The counter-argument, of course, is that AI companies feel constrained by the existing patchwork of state laws. To that, I say, tough. Congress must finally step up, rather than effectively forfeiting its regulatory role. Perhaps the tech companies' complaints about this patchwork are what will finally motivate Congress to act. However, given Congress's apparent inability to deliver more than a lowest common denominator on AI, even if it does act, states will likely need to continue raising the bar.

The Strength of State-Level Innovation in AI Governance

And that is a positive development. State-level democracy has proven more vibrant than what currently exists at the federal level. Let the states be the laboratories where best practices in AI regulation can be developed, refined, and then adopted nationwide. AI absolutely requires guardrails. If Congress is currently unable to provide them, let the states perform the job that the federal government seemingly cannot. The 10th Amendment supports states' rights to do so.

The Urgent Need for Guardrails Amidst Rising AI Threats

Make no mistake: these guardrails are more critical than ever as AI capabilities advance. New AI-based threats are emerging that we cannot afford to ignore for another decade. The family of Sewell Setzer III, a 14-year-old boy who took his own life after an AI chatbot allegedly encouraged him to do so, recently achieved a significant court victory against Character.AI. The company had argued for First Amendment protection from liability, a claim a federal judge has now dismissed, allowing the family’s wrongful death suit to proceed.

One of the directors of the AI Now Institute aptly summarized the situation: “The proposal for a sweeping moratorium on state AI-related legislation really flies in the face of common sense. We can’t be treating the industry’s worst players with kid gloves while leaving everyday people, workers and children exposed to egregious forms of harm.”

I wholeheartedly agree. All senators and congresspersons, regardless of party affiliation, should unite to remove this single sentence from the reconciliation budget bill. Contact your representatives immediately.
