The EU AI Act Is a Product Problem, Not a Legal One

Most product managers first encounter the EU AI Act as a paragraph in a risk report, or a one-liner in a quarterly legal update. Something to be noted and filed. That is precisely the wrong reaction, and the teams that treat it that way are going to discover why sometime around August 2026. Particularly in the UK, where the instinct since 2021 has been to assume EU regulation no longer applies.
The Act is not, primarily, a technology law or a compliance burden to be managed by legal. It is a product governance framework with legal teeth. The obligations it creates map directly onto decisions that product teams make every day: what to build, who the system acts on, how it makes decisions, what oversight users have and how you document all of it. If that sounds like the product manager's job description, it is.
The timeline that is already running
The Act entered into force in August 2024, but the dates that should concern product teams are later.
- February 2025 — audit any feature that could fall under the eight banned use cases.
- February 2025 — document staff AI literacy training. Already overdue.
- August 2026 — stand up risk management, technical documentation and human oversight.
- August 2026 — conformity assessment and EU database registration required.
From February 2025, prohibitions on unacceptable-risk AI systems took effect. These cover real-time biometric surveillance in public spaces, AI-powered social scoring systems and systems that exploit psychological vulnerabilities for commercial or political manipulation. For most product teams, these prohibitions are not the primary concern. The less-noticed implication of February 2025 is this: from that date, both providers and deployers of AI systems must ensure that employees and operators have a sufficient level of AI literacy. That is not a vague aspiration. It is a documented organisational obligation.
From August 2025, obligations for model providers came into force. If your product integrates a foundation model — GPT-4, Claude, Gemini or Llama — your model provider is now operating under new documentation and transparency requirements. The Code of Practice signed by 26 major AI providers including Microsoft, Google and Anthropic is a voluntary compliance framework, but it signals the direction of travel.
Before going any further, the territorial scope point is worth stating clearly because it is the single most common misreading. The Act applies to any provider whose AI system is placed on the EU market or whose outputs affect people located in the EU — regardless of where the company is based. A London-headquartered fintech lending to German customers, or a Manchester-built tool used by French government departments, is fully in scope under Article 2. Brexit changed where UK companies pay tax and file accounts. It did not change which users their products reach.
The date that matters most right now is August 2026, roughly four months from the time of writing. From this point, the full high-risk AI system requirements apply: quality management systems, technical documentation, human oversight provisions and registration in the EU database. If your product makes decisions about people in areas the Act designates as high-risk and you do not have a compliance programme already running, the window is closing.
What risk tier your product is actually in
The Act organises AI systems into four levels. Understanding which applies to your product determines the scope of what you are required to do.
- Unacceptable risk: social scoring · biometric mass surveillance · manipulative AI
- High risk: credit scoring · CV screening · clinical AI · law enforcement · benefits eligibility
- Limited risk: chatbots · synthetic media · automated recommendations (transparency only)
- Minimal risk: spam filters · autocomplete · most B2B tooling
Unacceptable risk is outright prohibition. Eight specific use cases are banned entirely: mass surveillance, social scoring, subliminal manipulation and related harms. For most commercial product teams, these are clear lines rather than grey areas.
High-risk systems face the heaviest obligations. Annex III of the Act specifies the categories in detail: biometric identification and emotion recognition, critical infrastructure, education and vocational training, employment decisions, access to essential services (credit, insurance, benefits), law enforcement, migration and emergency services. If your product touches any of these areas, you are in high-risk territory regardless of how simple the component is. A rules-based credit scoring model that uses any machine learning is not exempt because it seems straightforward.
Limited risk covers systems like chatbots and synthetic media generation. The primary obligation is transparency: users must know when they are interacting with an AI system. This requires deliberate product decisions about disclosure at interaction points, not a legal notice buried in terms and conditions.
Minimal risk is everything else. Spam filters, autocomplete, basic content recommendations. No regulatory obligation beyond standard product safety law.
One important caveat on this tier framing: it is commonly depicted as a pyramid, but the Act applies stacking compliance checks rather than mutually exclusive categories. A high-risk system also faces limited-risk transparency obligations. Knowing your tier is the start of the analysis, not the end of it.
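To make the stacking point concrete, here is a minimal triage sketch in Python. The flag names and obligation strings are illustrative assumptions rather than language from the Act, and the output is a prompt for a proper legal review, not a classification.

```python
from enum import Enum

class Tier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def triage(prohibited_practice: bool, annex_iii_use_case: bool,
           interacts_with_users: bool, generates_synthetic_content: bool) -> tuple[Tier, list[str]]:
    """Rough pre-shipping triage: returns an assumed tier plus stacked obligations."""
    if prohibited_practice:
        return Tier.UNACCEPTABLE, ["do not ship"]

    obligations: list[str] = []
    tier = Tier.MINIMAL
    if annex_iii_use_case:
        tier = Tier.HIGH
        obligations += ["risk management system", "technical documentation",
                        "human oversight", "conformity assessment", "EU database registration"]
    elif interacts_with_users or generates_synthetic_content:
        tier = Tier.LIMITED

    # Transparency stacks on top of high-risk obligations rather than replacing them.
    if interacts_with_users or generates_synthetic_content:
        obligations.append("disclose AI interaction / label synthetic content")
    return tier, obligations

# A credit-scoring chatbot feature: high-risk *and* subject to disclosure.
print(triage(prohibited_practice=False, annex_iii_use_case=True,
             interacts_with_users=True, generates_synthetic_content=False))
```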
The provider trap most product teams fall into
Here is the distinction that catches product teams off guard: the Act differentiates between providers (who develop AI systems and place them on the market) and deployers (who use AI systems in a professional context under their own authority).
The assumption most teams make is that they are deployers when they are, legally, providers. If you integrate a third-party foundation model into your product and ship it under your brand, you are the provider of that downstream AI system, even though you did not train the underlying model. The obligations fall on you, not exclusively on OpenAI or Anthropic. You cannot outsource your conformity assessment to your model vendor.
For high-risk systems, those obligations include maintaining a quality management system, producing technical documentation, completing an EU conformity assessment and registering the system in the EU database. Non-compliance can result in fines up to €35 million or 7% of global annual turnover, whichever is higher. The provider/deployer distinction is not a technicality to be resolved by contracts. It determines who carries liability when something goes wrong.
For a UK product team, the practical test is straightforward: does your product have users, customers or deployers in any EU member state? If yes, you are in scope. The legal entity being in London is not a defence, and even a token amount of EU revenue is enough to bring the obligations with it.
This has a specific implication for product teams building with foundation models: you need to understand what your model provider is and is not compliant with, and you need to understand how that affects your downstream obligations. The 26 providers who signed the Code of Practice have agreed to transparency and documentation standards, but their compliance does not transfer to you.
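One rough way to frame the scope-and-role question during a feature inventory is sketched below. The field names are assumptions, and the real Article 2 analysis belongs with counsel rather than a boolean.

```python
from dataclasses import dataclass

@dataclass
class AIFeature:
    served_to_eu_users: bool        # users, customers or deployers in any EU member state
    outputs_used_in_eu: bool        # outputs affect people located in the EU
    shipped_under_own_brand: bool   # you place the system on the market under your name
    built_on_third_party_model: bool

def in_scope(f: AIFeature) -> bool:
    # Territorial scope does not depend on where the company is incorporated.
    return f.served_to_eu_users or f.outputs_used_in_eu

def likely_role(f: AIFeature) -> str:
    # Wrapping a vendor's foundation model and shipping it under your own brand
    # still makes you the provider of the downstream system.
    return "provider" if f.shipped_under_own_brand else "deployer"

feature = AIFeature(served_to_eu_users=True, outputs_used_in_eu=True,
                    shipped_under_own_brand=True, built_on_third_party_model=True)
print(in_scope(feature), likely_role(feature))  # True provider
```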
What oblivious non-compliance looks like
The pattern I keep running into is not deliberate non-compliance. It is oblivious non-compliance.
In conversations with product and engineering leaders at UK tech companies over the past year, the same scenario plays out repeatedly. A team is using an AI tool to screen candidates or route customer support tickets. The system is making decisions that affect EU residents. Nobody in the room has connected any of it to the EU AI Act. Ask what risk tier the system sits in, who has been trained to operate it, or who the legal provider is, and the answers are not there.
Under Article 4 of the Act, those are not advanced questions. They are the minimum expected of anyone operating an AI system in a professional context. That obligation has been in force since February 2025.
These are not outliers. They are representative of where most UK product teams currently sit: aware that regulation exists, confident it applies to someone else and operating AI systems affecting EU users without any of the governance the Act requires.
What this means by role
The Act does not address product roles by title, but its obligations map clearly onto different parts of the product organisation.
| Role | Consumer SaaS | Fintech | Healthtech | HR Tech | Govtech | B2B Infra |
|---|---|---|---|---|---|---|
| Junior PM | Low (Article 50 transparency) | Moderate | Moderate | Moderate | Moderate | Low |
| Senior PM | Light | High (credit / insurance use case review) | High (medical device classification) | High | High | Light |
| Principal / Staff PM | Light | High | Severe (Annex III + post-market monitoring) | High | Severe | Moderate |
| Head of Product | Moderate | Severe | Severe | Severe | Severe | Moderate |
| CPO | High (risk ownership cascade) | Severe | Severe | Severe | Severe (vendor governance) | High |
Junior and mid-level PMs shipping AI features need to understand, at minimum, what risk tier their product sits in and whether it touches any of the Annex III categories. This is not specialist legal work. It is the same categorisation thinking that goes into any feature requiring a security review, and it should become a default pre-shipping question: does this use case fall under Annex III?
Senior PMs owning AI-enabled product areas need to own the documentation trail. Technical documentation requirements under the Act include system characteristics, intended purpose, accuracy and robustness metrics and data governance records. Most of this should exist as good product practice already. If it does not, the Act provides the forcing function to create it.
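As a sketch of what that documentation trail could look like when treated as a living product artefact, the record below uses illustrative field names; the Act's Annex IV sets the authoritative contents, and the example system is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalDocumentation:
    system_name: str
    intended_purpose: str
    system_characteristics: str          # architecture, models and versions in use
    accuracy_metrics: dict[str, float]   # results on named evaluation sets
    robustness_notes: str                # known failure modes, stress-test results
    data_governance: str                 # provenance, cleaning and bias checks on training data
    human_oversight_measures: str
    change_log: list[str] = field(default_factory=list)

doc = TechnicalDocumentation(
    system_name="credit-risk-scorer",    # hypothetical system used throughout these sketches
    intended_purpose="assist underwriters in assessing consumer credit applications",
    system_characteristics="gradient-boosted model, v3.2, retrained quarterly",
    accuracy_metrics={"auc": 0.81},
    robustness_notes="degrades on thin-file applicants; reviewed monthly",
    data_governance="training data drawn from the internal loan book, 2019-2024",
    human_oversight_measures="underwriter review required below a confidence threshold",
)
```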
Heads of product and CPOs carry the broadest responsibility. The risk management system required for high-risk AI is a lifecycle process: it runs from design through deployment through post-market monitoring and incident reporting. AI compliance needs to be a standing item on the product roadmap, with a named owner and a clear governance chain. CPOs also own vendor selection: if your third-party model providers are not meeting obligations, that has downstream consequences for your own compliance posture.
UK product leaders carry an additional decision their EU counterparts do not. The UK government has taken a deliberately lighter-touch, principles-based approach to AI regulation rather than a prescriptive one. In practice, most UK companies with any EU exposure will find it simpler to build to the EU standard once and apply it everywhere, rather than maintain two governance frameworks for the same product. CPOs and heads of product should be making that call explicitly, not by default — the dual-track approach almost always costs more than picking the higher bar.
For teams in fintech, healthtech and HR tech, the urgency is sharpest. Credit scoring, hiring tools and clinical decision support systems sit squarely in Annex III. According to MIT Sloan Management Review, organisations need approximately two years to prepare adequately for the Act's requirements. August 2026 is roughly four months away.
Eleven things product teams commonly miss
1. The AI literacy obligation was already due in February 2025
Article 4 required both providers and deployers to ensure employees and operators working with AI systems have a documented, sufficient level of AI literacy. Not an optional training programme. A formal organisational obligation that should have been in place over a year ago. If this has not been completed, it is already overdue. Audit evidence will be expected when enforcement picks up.
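What auditable evidence of that obligation might look like is sketched below; the field names are assumptions, the need to keep evidence is not.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LiteracyRecord:
    employee: str
    role: str                 # e.g. "support agent operating the routing model"
    training_completed: date
    curriculum_version: str   # what the training actually covered
    assessed_by: str

records = [
    LiteracyRecord("a.khan", "PM, AI screening feature", date(2025, 1, 20), "v1.2", "compliance"),
]
# The point is auditability: who was trained, on what, when and signed off by whom.
```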
2. Transparency obligations apply even at limited risk
Teams focused on Annex III tend to overlook the Act's transparency requirements, which cut across risk tiers rather than applying only to high-risk systems. If your product uses a chatbot, generates synthetic content or makes automated recommendations affecting users, those interactions require clear disclosure. The obligation sits with the product, not the underlying model provider.
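One way to keep disclosure at the interaction point rather than in the terms of service is sketched below, with invented endpoint and field names.

```python
def generate_answer(prompt: str) -> str:
    # Stand-in for the real call to your model layer.
    return "Here is what I found..."

def chat_response(user_message: str) -> dict:
    answer = generate_answer(user_message)
    return {
        "message": answer,
        "ai_generated": True,  # surfaced in the UI at the point of interaction
        "disclosure": "You are chatting with an automated assistant.",
    }

print(chat_response("Where is my order?"))
```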
3. Integrating a foundation model makes you a provider
Worth stating plainly, because it catches teams out repeatedly: if you wrap a foundation model in your product and ship it to users, you are a provider of that AI system under the Act. Your model vendor's compliance framework does not transfer to you. Your conformity assessment, technical documentation and EU database registration remain your responsibility.
4. Human oversight is a staffing requirement, not a UI element
The Act requires that high-risk systems be overseen by natural persons "who have the necessary competence, training and authority". A review button in your interface does not satisfy this. You need to identify specific individuals, document their competence and training, and give them genuine authority to intervene. This has resourcing implications most product teams have not yet budgeted for.
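Treated as a staffing record rather than a button, an oversight assignment might look something like the sketch below; the fields and names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class OversightAssignment:
    system: str
    overseer: str
    competence_evidence: str   # training completed, relevant domain qualification
    authority: str             # what they can actually do: pause, override, roll back
    escalation_path: str

assignment = OversightAssignment(
    system="cv-screening-ranker",
    overseer="j.owusu (talent operations lead)",
    competence_evidence="completed model-limitations training, November 2025",
    authority="can override any ranking and halt automated rejection emails",
    escalation_path="head of people, then DPO",
)
```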
5. Post-market monitoring is a continuous obligation, not a launch task
Compliance does not end at go-live. High-risk AI systems require ongoing post-market monitoring, performance logging and documented processes for responding to issues. If your product has shipped and you have no monitoring programme, you have a gap that runs from the day it went live.
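A bare-bones monitoring entry could look like the following; the metric names are assumptions, and the point is that the record is produced and reviewed continuously rather than once at launch.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MonitoringEntry:
    system: str
    captured_at: datetime
    metrics: dict[str, float]   # e.g. drift scores, human override rates, error rates
    issues_open: int
    reviewed_by: str

entry = MonitoringEntry(
    system="benefits-eligibility-checker",
    captured_at=datetime.now(timezone.utc),
    metrics={"human_override_rate": 0.07, "population_drift": 0.12},
    issues_open=1,
    reviewed_by="on-call product owner",
)
```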
6. A single high-risk component can pull the whole product into scope
If one feature within a broader product falls under Annex III, that component must comply with high-risk requirements. You cannot ringfence a non-compliant AI feature inside a larger compliant product. The classification follows the use case, not the architecture or how the code is deployed.
7. Serious incidents have a reporting deadline
If a high-risk AI system causes or contributes to serious harm, providers must report it to the relevant national authority without undue delay. That requires a documented incident response process to exist before something goes wrong. This is a product and engineering design requirement, not something legal can bolt on retrospectively.
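The record such a process would need to produce is sketched below with assumed fields; the reporting mechanics themselves are set by the Act and the relevant national authority.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class SeriousIncident:
    system: str
    detected_at: datetime
    description: str                        # what happened and who was affected
    suspected_cause: str
    national_authority: str                 # which regulator receives the report
    reported_at: Optional[datetime] = None  # filled in when the report is actually made
    corrective_actions: list[str] = field(default_factory=list)
```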
8. EU database registration is a product team task, not a legal one
High-risk AI systems must be registered in the EU's public database before being placed on the market. The registration requires technical information that only product and engineering teams hold: intended purpose, capabilities, limitations, accuracy figures and training data descriptions. Legal cannot file this without detailed input from the team that built the system.
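As an illustration of the kind of information only the building team can supply, here is a hypothetical payload; the field names are invented rather than the database's actual schema.

```python
# Hypothetical registration payload; real registrations go through the EU's own process.
registration_payload = {
    "provider": "Example Ltd (London)",
    "system_name": "credit-risk-scorer",
    "intended_purpose": "consumer credit eligibility assessment",
    "capabilities": ["score applications", "flag thin-file cases for review"],
    "limitations": ["not validated for business lending"],
    "accuracy_summary": {"auc": 0.81, "evaluation_set": "2024 holdout"},
    "training_data_description": "internal loan book, 2019-2024, EU and UK applicants",
}
```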
9. The open source exception applies to model providers, not product builders
Foundation models released under open licences are largely exempt from GPAI documentation requirements. That exemption applies to the model provider. If you build a product on top of an open-source model, you are still the provider of the downstream system and your compliance obligations remain unchanged.
10. Your vendor contracts probably predate the Act
Many teams assume their agreements with AI model providers transfer sufficient liability. Most do not cover EU AI Act compliance in a way that protects the downstream product builder. If your contracts were signed before 2025, they almost certainly predate the Act's obligations entirely. For UK product teams the gap usually compounds: many AI vendor contracts were negotiated with US providers under US governing law, with no mention of EU AI Act compliance at all. The problem is not just that the contract predates the Act; the contract's governing law may not even contemplate EU regulatory obligations as a category. This is worth a conversation with legal before August, not after.
11. Brexit does not mean out of scope
The EU AI Act applies to any AI system placed on the EU market or that produces outputs used in the EU, regardless of where the provider is incorporated. If your product has EU users — even a small percentage — you are a provider subject to the Act. This is the most common misconception among UK product teams and the one most likely to produce an unpleasant regulatory surprise. The threshold is not how much of your business is European; it is whether any of it is.
The EU AI Act is often framed as Europe putting a brake on innovation. It is more useful to think of it as a standard that rigorous product practice should already meet: document what you build, ensure humans can oversee it, be transparent about what AI is doing and who it acts on. The uncomfortable question for most product teams is not whether they can comply. It is how much of this they should have been doing anyway.
For UK teams, there is an added layer of complacency baked in — the assumption that leaving the EU simplified the regulatory picture. For any product with European users, it did not. The Act does not check where your company filed its last set of accounts.

