December 13, 2025

Does the EU AI Act apply to US companies? What you need to know

By Modulos · 10 min read

Updated April 2026

Short answer: the EU AI Act applies to any company placing AI systems on the EU market or whose AI outputs are used in the EU, regardless of where the company is incorporated. If you sell AI into Europe, you are in scope.

If you run an AI or SaaS business in the US, the EU AI Act can feel like déjà vu: another big Brussels law you can safely ignore until the lawyers shout loud enough, and then you do the bare minimum, like many did with GDPR. That instinct is wrong this time, for reasons this post will lay out.

What actually happened with the Digital Omnibus

On 19 November 2025 the European Commission published its Digital Omnibus package. The political headline most US founders heard ("EU delays AI Act") was never quite accurate. What the Commission actually did: tidy up parts of the digital rulebook, strengthen the EU AI Office, and propose a narrow one-year staging of some transparency penalty enforcement (for AI-interaction notices and synthetic-content labelling).

Since then the Omnibus has moved into the ordinary legislative process. As of April 2026 it is in trilogue between the Council, Parliament and Commission. The current proposals on the table include pushing the standalone high-risk deadline from 2 August 2026 to 2 December 2027, and the deadline for AI embedded in regulated products to 2 August 2028. Those dates are not final, and the trilogue could land in several places.

Two things you should take from that. First, the direction of the political wind is toward giving industry more time, not less obligation. Second, nothing you should be doing anyway changes. The question is still: "can you sell a high-risk AI system into the EU with CE marking, a technical file and a conformity assessment?" The deadline may move by 16 months. The answer you eventually have to deliver does not change.

So you are running on two clocks: the legal clock, now uncertain between August 2026 and December 2027, and the customer clock, which started ticking as soon as EU buyers began asking for proof. The customer clock is the one that actually matters.

1. This is not GDPR 2.0, it is product safety and market access

GDPR was mostly about behaviour: what you do with personal data, how you handle consent, what you disclose in a privacy notice. Many US companies treated it as paperwork that could be tidied up later. They got away with it.

The high-risk part of the AI Act sits on a different chassis. It is built on top of the EU's product-safety and market-surveillance framework. From the EU's perspective, a high-risk AI system is less like "another cloud service" and more like a regulated product: an X-ray machine, an industrial controller, a railway safety component.

For those products the rules are simple. You perform a conformity assessment. You draw up an EU Declaration of Conformity. You affix a CE mark. You maintain a technical file that can withstand scrutiny. Without those, you are not in "we might get a warning someday" territory. You are selling a product that EU law says should not be on the market, and market-surveillance authorities, customs, distributors and even major customers are empowered (and sometimes obliged) to block or withdraw it.

The mental upgrade for a US exec: GDPR was a compliance project. The AI Act high-risk regime is a market-access condition.

2. Your EU customers will enforce this long before Brussels does

You do not need a European regulator knocking on your San Francisco office for the AI Act to hurt. Your European customers will feel the pressure first and push it onto you.

Banks, insurers, telcos, healthcare providers, car makers, industrial firms: these are the organizations that will themselves be deployers of high-risk AI under the Act. Their supervisors will ask which AI systems they use, how those systems are classified, and what evidence exists that the providers meet the Act's requirements. The rational response from the customer is to push risk onto the vendor.

We are already seeing EU buyers add AI Act sections to RFPs, security reviews and vendor due diligence; ask for a clear plan to reach CE marking by a specific date; and update MSAs and SaaS agreements with AI Act representations, cooperation duties and, in some cases, indemnities.

You will feel that long before you ever see an enforcement press release. You start losing deals, or being carved out of strategic deployments, to vendors who can tell a more convincing AI Act story.

3. Your competitors can weaponize the AI Act against you

This is where the law becomes more than a compliance headache and starts to look like a competitive weapon. If a competitor wants to hurt you, they do not need a regulator. They can use the AI Act and national unfair-competition law to get you switched off.

Concrete example. You and a European rival both sell AI systems for credit scoring or automated candidate screening. They invest in compliance: classify the system as high-risk, build an AI QMS, run documented tests, assemble a technical file, go through conformity assessment. You decide to wait and see. No CE mark, no structured documentation, just whatever the engineers keep in internal wikis.

Your competitor files a case in Germany. They argue that you are placing a high-risk AI system on the EU market, that the Act requires CE, an EU Declaration of Conformity and a technical file, and that you cannot produce them.

A German court can treat that as a market-conduct violation and grant a preliminary injunction forbidding you from selling or operating the system in that market until the case is resolved or you prove conformity. German courts are used to CE-related disputes in other sectors; they know what missing paperwork looks like. The result is not a theoretical fine in five years. It is an order that effectively switches you off in a key market. Main proceedings can stretch to twelve or eighteen months if contested. Even if you eventually update your documentation and win, you may have lost a year of European revenue, handed market share to your rival, and picked up "AI product banned in Europe" headlines you did not need.

Guaranteed in every case? No. Plausible enough that a determined competitor might try? Yes. For a US company that cares about European expansion, that risk alone makes "we'll just ignore it" a bad bet.

4. Global convergence: the Brussels Effect

Another objection US teams raise: "we can't optimize for every jurisdiction. Why should the EU's view of AI dictate what we build?"

Fair, but the choice is not really "EU or US". The direction of travel is the same in most serious jurisdictions: governance, risk management, testing, auditability. In the US, sector regulators are already moving. Banking supervisors talk about model risk management and third-party risk. Healthcare and life-sciences regulators are issuing AI guidance. Securities and labour regulators are looking at automated decision-making. You may never see a single omnibus "US AI Act", but you will see AI expectations embedded in supervisory letters, exams and enforcement actions.

Other jurisdictions (UK, Canada, Singapore, UAE, Saudi Arabia) are developing AI frameworks that may not copy the EU text but echo its themes: who is accountable, how systems are tested and monitored, whether there is an audit trail.

If you take the AI Act seriously for your high-risk systems (classify them, build an AI QMS, document data and models properly, maintain a coherent technical file), you are not doing Brussels a favour. You are building an operating model you can reuse with US regulators, global auditors, and enterprise customers in every region. The alternative is treating each new law as a separate fire drill with no shared structure, which costs more and gives you less control. For the cross-framework view, see our piece on AI governance frameworks compared.

5. You do not have to like the AI Act to turn it into an advantage

Many US founders will continue to insist the AI Act is over-engineered, hard to enforce, and likely to be watered down. Some competitors will refuse to engage until they are forced to. You can use that reluctance.

You do not need a gold-plated consultant-designed compliance programme. You do need to be able to look a serious EU buyer in the eye and say, without bluffing, whether your product is or is not a high-risk system under the Act, how you plan to reach CE marking and by when, what governance and evidence you already have in place, and how you will share that information contractually.

That is a higher bar than "we added a privacy notice and a checkbox". It is still a finite bar. Meet it while others are still arguing the law will never bite and you become the default safe choice for risk-sensitive EU buyers. You also sign AI Act warranty clauses with a clear conscience, because you actually have a plan and a technical file in progress.

You do not have to love the AI Act. Treat it as a slightly heavy-handed way of forcing you to do things you should be doing anyway: know what your systems are doing, test them properly, document them properly, take responsibility for them.

A few blunt questions US companies tend to ask

"We don't have an EU entity. Can the AI Act really touch us?" Yes. The Act looks at where systems are placed on the market or used, not where the corporate entity lives. If you sell into the EU or operate high-risk AI that affects EU users, you are in scope. Even if a regulator never calls you directly, your EU customers will put AI Act obligations into your contracts.

"What are the real chances someone notices if we ignore this?" High. Your customers will notice. Your competitors might decide to notice. If a regulator investigates your customer, they will follow the trail to your product. Unlike with GDPR, many more parties have both the standing and the incentive to make noise.

"Could a competitor really get us banned from the market?" They can try, and the law gives them tools. In some member states, especially Germany, courts routinely handle CE and product-safety cases. If you are clearly in a regulated high-risk category and cannot show CE or a serious technical file, a temporary injunction is not a stretch. That alone can do real damage.

"What is the smallest serious step we can take in the next six months?" Classify your products. Name an internal owner. Agree a date by which you intend to be CE-ready for any high-risk systems. Build a small but real evidence pack: a short description of your AI QMS, an overview of the tests and evaluations you already run, a skeleton index of what goes into your technical file, and a draft assurance note you can show to EU customers. From there you can decide how much further to go. The point is to move out of the "we'll ignore it like GDPR and hope" category.

The Modulos AI Governance Platform is built to help organizations get ready for the EU AI Act. If your product is, or may be, high risk, the platform can cut your compliance effort by up to 90% by connecting your sources (code, docs, logs) and showing you the gaps and what to do about them.

Ready to Transform Your AI Governance?

Discover how Modulos can help your organization build compliant and trustworthy AI systems.