The European Union just released a new set of guidelines—the “AI Code of Practice”—to help companies comply with its landmark Artificial Intelligence Act. While the AI Act itself doesn’t go into full effect until 2026, the EU made it clear that companies need to start preparing now. This new Code of Practice gives businesses a head start, but it is also a warning: the grace period is short, and noncompliance will get expensive.

What’s in the Code of Practice?
The Code focuses on foundation models—large AI systems that power tools like ChatGPT, image generators, and decision-making engines. Companies building or deploying these models must follow rules about transparency, risk management, and protecting fundamental rights. In plain English, that means if you’re using AI to make decisions about people, you’d better be able to explain how it works and make sure it’s not biased.
Under the AI Act, companies are prohibited from training their AI systems on copyrighted material unless they have a license from the copyright owner, and they must also ensure their AI systems comport with the EU Copyright Directive. In addition, companies must ensure that their AI systems do not evaluate or classify individuals based on their social behavior or personal traits in a manner that results in a detrimental outcome for those individuals—in other words, no social scoring.
One major shift is the EU’s expectation that developers will test their models for accuracy, security, and discrimination. They also need to keep documentation explaining how the AI works and how they trained it. If a company is using copyrighted data, personal information, or third-party systems, they need clear documentation and agreements in place.
Why This Matters
This Code isn’t just a list of suggestions. It’s a roadmap for what enforcement will look like in 2026. Once the AI Act kicks in, companies that fail to follow the rules could face fines of up to 7% of their global annual revenue. That’s more than the maximum penalty for violating the stringent General Data Protection Regulation.
For U.S. companies, this also signals what’s coming down the pipeline here. We already see movement from the Federal Trade Commission and the U.S. Patent and Trademark Office around AI regulation. Now is the time for American businesses to review their AI systems, audit their data sources, and establish acceptable use policies. Waiting until the government knocks on your door is not a good plan.
Next Steps for Businesses
Because these new measures will likely put EU AI providers, and possibly U.S. AI providers, at a lasting disadvantage relative to less restrictive countries like China, global AI providers may opt to customize different versions of their AI platforms for use in different regions.
If your company is developing or using AI, it is critical to start now:
- Map out where and how you use AI in your business.
- Document your training data and how your systems work.
- Create a written AI Acceptable Use Policy for your employees.
- Talk to a lawyer familiar with AI law to get your ducks in a row before enforcement starts.
The EU’s AI Act and Code of Practice are just the beginning. AI laws continue to change quickly. If you want to stay ahead of the curve, and out of court, you need to act now.
