Brett Trout
Artificial Intelligence (AI) is changing the way we work and live. But instead of helping businesses make the most of this new tool, states like California, Texas, Utah, and Colorado are putting up roadblocks. Their new AI laws are confusing and expensive to follow, and they could stop companies from using helpful AI tools to increase productivity.

Businesses Are Scared to Use AI, for Good Reason
I recently spoke with manufacturers at a large business conference after a presentation on these new AI-focused state laws. Every national manufacturer I spoke with was concerned about how these new laws might affect the implementation of AI in their businesses. They were worried that using AI might get them into legal trouble, even though no one knows exactly what these new laws mean or how courts might interpret them. Not surprisingly, none of these business owners wanted to be the first to test these new laws in court.
Why These Laws Don’t Work
The goal of any law should be to solve problems without causing new ones. But these new state laws do the opposite. These states are creating a patchwork of rules that do not even match up with each other. Each of these states has its own language and its own set of requirements.
Imagine trying to run a business that uses AI across 50 states. You would have to hire lawyers to:
- Read and explain every single law
- Update your systems to meet different rules in each state on an ongoing basis
- Take an educated guess as to how courts may interpret these new laws
- Propose policies to avoid breaking any of these ambiguous, untested laws
- Defend you if one of those states decides your AI tool breaks its law
That is a lot of money and a lot of time. And none of it helps businesses grow or make better products. Instead, it keeps lawyers busy and businesses nervous.
California’s New Law: A Case Study in Overreach
Let’s look at just one of these new laws, California’s SB243. It sounds good on the surface: it tries to protect kids from harmful AI content. But in practice, it is far too broad. The law applies even to general tools like ChatGPT, requiring operators to use “evidence-based methods” to detect suicidal thoughts. That may sound reasonable in theory, but what are those “methods”? And how do you apply them in a way that does not degrade the experience for the 99% of users who are not at risk?
The new law also requires AI companies to:
- Notify crisis service providers
- Detect and remove harmful content
- Report every year on their efforts
- Follow vague rules on what to do if the user might be a minor
Who decides which online user “might” be a minor? How do you make that call without violating someone’s privacy? How much will this cost? How much of a burden does this place on current users? And how is a startup supposed to afford this on top of trying to comply with all of the similar, but different, laws from other states?
The Real Cost of Compliance
The more states that pass these kinds of laws, the more money companies have to spend just to comply. That is money they could be using to make better products, hire more people, or cut prices. Instead, these companies are hiring lawyers and compliance officers to deal with confusing rules that could change at any moment.
Worse, these laws are helping big tech at the expense of startups. Big companies have the money to keep up with state-by-state regulations. Small companies do not. That is unfair, and it is not how you foster innovation.
These Laws Are Not Like Property Laws
Some argue that this is no different than states having different laws on a host of other subjects, such as property. That is a deceptive analogy. Property laws are built on centuries of experience with property, as well as thousands of cases and judges who understand those cases. These new state AI laws, by contrast, are vague, untested, and written in ways even judges might not understand. With AI, there is no playbook to follow. It is simply a guessing game with huge ramifications.
What Is Really Going On Here
This is classic rent-seeking: politicians making laws that look good on paper but end up helping the big companies that have the money necessary to influence legislation to their own ends. Rather than admit that all of this red tape gives big tech a competitive advantage, lawmakers play up the emotional angle with “think of the children” style arguments. That makes it hard for smaller companies to oppose, or even ask questions about, these onerous laws that only big companies can afford to follow.
The result? Less innovation, fewer startups, and more wasted money.
A Better Way Forward
Instead of 50 different sets of rules, we need:
- One clear national standard
- Coordination between government and international organizations
- Laws that solve real problems without creating new ones
And we need lawmakers who understand the cost of getting this wrong.
Final Thoughts
If we continue down this path, we will strangle the very innovation these state laws claim to protect. If you want to protect the public, fine. But do it in a way that does not kill small business in the process.
Let’s pass smart, federal AI laws, not a patchwork of state AI laws that make it impossible for anyone but the biggest players to compete.


