In a stunning and decisive bipartisan 99-1 vote, the U.S. Senate has pulled the plug on a proposal that would have blocked states from regulating artificial intelligence (AI) for the next decade. Tucked inside a broader tax-and-spending package, the failed provision would have tied AI funding to states’ willingness to surrender their ability to enact AI laws. Now, with that moratorium off the table, states are back in the driver’s seat—each with the power to write its own rules for how AI is used, misused, or even weaponized.

This is a nightmare scenario for the tech industry. AI leaders like OpenAI and Google backed the moratorium, arguing that a patchwork of conflicting state laws would hamper innovation and undermine U.S. competitiveness against countries like China. Venture capital firms and lobbying groups echoed those fears, warning of regulatory chaos.
They weren’t wrong about the chaos.
Without a federal framework in place, AI developers now face the very real possibility of navigating 50 different sets of laws, each with its own rules on data privacy, deepfakes, algorithmic accountability, and protections for kids. From California’s audits and transparency mandates to New York’s RAISE Act requiring developers to report AI failures, the rules are multiplying fast. Even more distressing, these differing and potentially contradictory laws will likely be written by state lawmakers who are relative neophytes when it comes to understanding the complex nature of AI.
The moratorium’s supporters—led by Senator Ted Cruz—tried to soften the blow by cutting the proposed ban from 10 years to five and carving out exceptions for child safety and voice replication laws like Tennessee’s ELVIS Act. Even that watered-down version failed. Ultimately, Senator Marsha Blackburn joined with Democrat Maria Cantwell to strike the provision entirely, sending a message that states won’t be strong-armed into surrendering their authority.
While the tech sector is bracing for regulatory whiplash, state attorneys general and legislators are already celebrating. They’ve passed laws on robocalls, sexually explicit deepfakes, and AI use in housing and employment. Many of these laws could have been gutted had the moratorium passed. The number of AI-specific laws that companies must comply with is about to go from dozens to thousands, many of them written by people with little to no understanding of what AI is and how the wrong regulation could decimate the entire industry.
Buckle Up
AI isn’t just about coding and servers. There are valid concerns about AI, especially when it comes to safety, fairness, and privacy. Without rules, we’ve seen AI make decisions that affect people’s jobs, housing, and access to justice. There is no doubt that some form of AI regulation is necessary. The only question is whether that regulation should come at the state or federal level. Regulating AI at the state level will cripple AI companies, miring them in fifty times the regulatory red tape they would face under a single federal regime. Many burgeoning AI companies will simply fail under the intractable regulation that is to come. All the while, AI competitors in countries like China will soar ahead, unfettered by some parochial Luddite lawmaker proclaiming “Fire BAD.”
Thankfully, the fight is not over. Cruz is hinting that he may try to push the moratorium through again, buttressed by the full support of powerhouse AI companies. Until then, every AI provider should buckle up. The era of one AI rulebook just got a lot farther away, and the age of 50 different ones has just begun.