Why US companies should care about the EU AI Act

If you run an AI or SaaS business in the US, the EU AI Act can feel like déjà vu: another big Brussels law you can safely ignore until the lawyers shout loudly enough, and then do the bare minimum, as many companies did with GDPR.
The events of 19 November don’t really support that strategy.
With its Digital Omnibus package, the European Commission pointedly did not do what many in the industry were hoping for. It did not move the AI Act’s high-risk deadline. It did not “stop the clock”. Instead, the Commission focused on tidying up parts of the digital rulebook, strengthening the EU AI Office, and proposing a narrow, one-year staging of penalty enforcement for certain transparency obligations (AI-interaction notices and synthetic-content labelling). Those proposals now go to the Council and Parliament through the ordinary legislative process.
The key date remains 2 August 2026. From that point, many AI systems cannot legally be placed on the EU market without CE marking, technical documentation and a conformity assessment. The media can keep debating “delay”; the legal timeline is now anchored.
If you sell AI into Europe, or plan to, you are effectively running on two clocks:
- the legal clock to 2 August 2026, and
- the customer clock, which starts ticking as soon as EU buyers demand proof that you have a credible AI-Act plan.
The rest of this article is about why both clocks matter to a sceptical US company.
1. This is not GDPR 2.0 – it’s product safety and market access
GDPR was mainly about behaviour: what you do with personal data, how you handle consent, what you disclose in a privacy notice. Many US companies treated it as a paperwork exercise that could be tidied up later.
The high-risk part of the AI Act is built on a different chassis. It sits on top of the EU’s product-safety and market-surveillance framework. From the EU’s perspective, a high-risk AI system looks less like “just another cloud service” and more like a regulated product: an X-ray machine, an industrial control system, a safety component in a railway.
For those systems, the rules are simple:
- you perform a conformity assessment,
- you draw up an EU Declaration of Conformity,
- you affix a CE mark, and
- you maintain a technical file that can withstand scrutiny.
Without those, you are not in “we might get a warning someday” territory. You are selling a product that EU law says simply should not be on the market. Market-surveillance authorities, customs and even your distributors and major customers are empowered – indeed, sometimes obliged – to block or withdraw it.
So the mental upgrade for a US exec is:
GDPR was a compliance project.
The AI Act high-risk regime is a market-access condition.
2. Your EU customers will enforce this long before Brussels does
You don’t need a European regulator to come knocking on your San Francisco office for the AI Act to start hurting. Your European customers will feel the pressure first.
Banks, insurers, telcos, healthcare providers, car makers, industrial firms: these are exactly the organisations that will themselves be deployers of high-risk AI under the Act. Their supervisors will ask:
- which AI systems they are using,
- how those systems were classified, and
- what evidence there is that the providers meet the Act’s requirements.
The rational response from those customers is to push risk onto their vendors. We are already seeing EU buyers:
- add AI-Act sections to RFPs, security reviews and vendor due diligence,
- ask for a clear plan to reach CE marking by a defined date, and
- update MSAs and SaaS agreements with AI-Act representations and warranties, cooperation duties and sometimes indemnities.
In other words, even if you personally think enforcement will be slow, your customers’ legal and compliance teams will start enforcing the Act through procurement and contracts.
You’ll feel that long before you ever see an enforcement press release. You simply start losing deals, or being carved out of strategic deployments, to vendors who can tell a more convincing AI-Act story.
3. Your competitors can weaponise the AI Act against you
This is where the law becomes more than a compliance headache and starts to look like a competitive weapon.
If a competitor wants to hurt you, they don’t need a regulator. They can use the AI Act and national unfair-competition law to get you switched off.
Here’s how that can play out.
You and a European rival both sell AI systems for, say, credit scoring or automated candidate screening. They invest in compliance: they classify the system as high-risk, they build an AI QMS, they run documented tests, they assemble a technical file and go through a conformity assessment. You decide to wait and see. There is no CE mark, no structured documentation, just whatever the engineers keep in internal wikis and notebooks.
Your competitor files a case in Germany and argues that:
- you are placing a high-risk AI system on the EU market,
- the AI Act requires CE marking, an EU Declaration of Conformity and a technical file, and
- you cannot produce them.
In that situation, a German court can treat your behaviour as a market-conduct violation and grant a preliminary injunction that forbids you from selling or operating the system in that market until the case is resolved or you can prove conformity. Courts there are used to dealing with CE-related disputes in other sectors; they know what missing paperwork looks like.
The result is not a theoretical fine in five years. It is an order that effectively switches you off in a key market. Main proceedings can easily stretch to twelve to eighteen months if contested. Even if you eventually update your documentation and persuade the court that you are compliant, you may have:
- lost a year or more of European revenue,
- handed market share to your rival, and
- picked up “AI product banned in Europe” headlines you did not need.
Is this outcome guaranteed in every case? No. Is it legally plausible enough that a determined competitor could try it? Absolutely.
For a US company that cares about its European expansion, that risk alone makes “we’ll just ignore it” a dangerous bet.
4. Global convergence: The Brussels Effect
Another objection US teams raise is: “We can’t optimise for every jurisdiction. Why should the EU’s view of AI dictate what we build?”
Fair point, but the choice is not really “EU or US”. The direction of travel is the same in most serious jurisdictions: governance, risk management, testing and auditability.
In the US, sector regulators are already moving. Banking supervisors talk about model risk management and third-party risk. Healthcare and life-sciences regulators are issuing AI-related guidance. Securities and labour regulators are looking closely at automated decision-making. You may never see a single, omnibus “US AI Act”, but you will see AI expectations embedded in supervisory letters, exams and enforcement.
Meanwhile, other jurisdictions – UK, Canada, Singapore, UAE, Saudi Arabia and others – are developing AI frameworks that may not copy the EU text, but echo its themes. They all want to know:
- who is accountable,
- how systems are tested and monitored, and
- whether there is an audit trail.
If you take the AI Act seriously for your high-risk systems – classify them, build an AI QMS, document data and models properly, maintain a coherent technical file – you are not just “doing Brussels a favour”. You are building an operating model that you can re-use with:
- US regulators and examiners,
- global auditors, and
- enterprise customers in every region.
The alternative is to treat each new law as a separate fire drill, with no shared structure. That usually costs more and gives you less control.
5. You don’t have to like the AI Act to turn it into an advantage
Many US founders and executives will continue to insist that the AI Act is over-engineered, hard to enforce and likely to be watered down in practice. Some competitors will simply refuse to engage until they are forced to.
You can use that reluctance.
You do not need a gold-plated, consultant-designed compliance programme. But you do need to be able to look a serious EU buyer in the eye and say, without bluffing:
- whether your product is, or is not, a high-risk system under the AI Act,
- how you plan to reach CE marking and by when,
- what governance and evidence you already have in place, and
- how you will share that information contractually.
That is a higher bar than “we added a privacy notice and a checkbox”. It is still a finite bar. And if you can meet it while others are still arguing the law will never bite, you become the default safe choice for risk-sensitive EU customers.
You also put yourself in a better position when the inevitable AI-Act clauses appear in contracts. Signing a warranty that your system will comply by a certain date is much less scary when you actually have a plan and a technical file in progress.
You don’t have to love the AI Act. You can treat it as a slightly heavy-handed way of forcing you to do things you probably should be doing anyway: know what your systems are doing, test them properly, document them properly, and take responsibility for them.
Q&A A few blunt questions US companies tend to ask
“We don’t have an EU entity. Can the AI Act really touch us?”
Yes. The Act looks at where systems are placed on the market or used, not where the corporate entity lives. If you sell into the EU or operate high-risk AI that affects EU users, you are in scope. Even if a regulator never calls you directly, your EU customers will put AI-Act obligations into your contracts.
“What are the real chances someone notices if we ignore this?”
Pretty high. Your customers will notice. Your competitors might decide to notice. And if a regulator investigates your customer, they will follow the trail to your product. Unlike with GDPR, more parties have both the standing and the incentive to make noise.
“Could a competitor really get us banned from the market?”
They can certainly try, and the law gives them tools. In some member states, especially Germany, courts are used to dealing with CE and product-safety cases. If you are clearly in a regulated high-risk category and you cannot show CE or a serious technical file, it is not a stretch for a court to grant a temporary injunction. That is enough to do real damage.
“What is the smallest serious step we can take in the next six months?”
Start by classifying your products, naming an internal owner, and agreeing a date by which you intend to be CE-ready for any high-risk systems. Build a small but real evidence pack: a short description of your AI QMS, an overview of tests and evaluations you already run, a skeleton index of what will go into your technical file, and a draft assurance note you can show to EU customers. From there, you can decide how much further you want to go. The important thing is that you move out of the “we’ll ignore it like GDPR and hope” category.
The Modulos AI Governance Platform has been designed to help organizations get ready for the EU AI Act. If your product is, or may be, high risk, the platform can speed up your compliance process by up to 90% by connecting your sources – code, docs, logs – and showing you the gaps and what you can do about them.