The Thinking Company

EU AI Act Compliance for Enterprise: What You Must Do Before August 2026

EU AI Act compliance requires enterprises to classify every AI system by risk level, fulfill obligations tied to that classification, and prove compliance through documentation, conformity assessments, and ongoing monitoring. Regulation (EU) 2024/1689 entered into force in August 2024, with enforcement phased through August 2027. The critical deadline for most enterprises is August 2, 2026, when high-risk AI rules become enforceable — with penalties reaching EUR 35 million or 7% of global annual turnover, whichever is greater. [Source: EU AI Act, Article 99]

That deadline is not distant. Conformity assessments alone take 6—12 months. If your organization has not started its AI inventory, risk classification, and governance build-out, you are already behind schedule.

An appliedAI study of 106 enterprise AI systems found that only 18% were clearly classified as high-risk, 42% were low-risk, and 40% had unclear classification — meaning two in five enterprise AI systems sit in a gray zone that requires immediate analysis. [Source: appliedAI, “AI Act Risk Classification Study,” 2025] Over half of organizations still lack systematic inventories of AI systems in production or development. [Source: Complianceandrisks.com, “EU AI Act Compliance Requirements,” 2026]

The financial exposure is real. Large enterprises with high-risk AI systems should budget $8—15 million for initial compliance, with $2—5 million in annual ongoing costs. Mid-market companies face $2—5 million initially and $500K—2M annually. [Source: LegalNodes, “EU AI Act 2026 Updates,” 2026] These are substantial numbers, but they are dwarfed by the penalty exposure and the reputational damage of non-compliance in a market where AI trust is becoming a competitive differentiator.

This guide covers what the EU AI Act requires in practice, what most enterprises get wrong, and how to build a compliance program that does not strangle innovation.

How the EU AI Act Risk Classification Works

The EU AI Act organizes all AI systems into four risk tiers. Your obligations depend entirely on where your systems land. Getting the classification right is the single most important step in the compliance process — and the step where most enterprises stumble.

Unacceptable Risk: Banned Outright

Article 5 of the AI Act prohibits AI practices that pose fundamental threats to safety, rights, and democratic values. These bans became enforceable on February 2, 2025, making them the first EU AI Act provisions with teeth. [Source: EU AI Act, Article 5; Official Journal of the European Union, 2024]

Prohibited practices include:

  • Social scoring by public authorities or on their behalf
  • Subliminal manipulation using techniques that exploit vulnerabilities to distort behavior in ways likely to cause harm
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions for specific threats)
  • Emotion recognition in workplaces and educational institutions
  • Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
  • Predictive policing based solely on profiling or personality traits

For most enterprises, the prohibited category triggers a straightforward audit question: “Do any of our AI systems do these things?” If the answer is yes, you must decommission them. The penalty tier for prohibited practices is the highest: up to EUR 35 million or 7% of total worldwide annual turnover, whichever is greater. [Source: EU AI Act, Article 99(3)]

No enforcement actions for prohibited practices have been publicly announced as of early 2026, but Finland activated national supervision laws on January 1, 2026, becoming the first EU member state with fully operational AI Act enforcement powers. Other member states are expected to follow throughout Q1—Q2 2026. [Source: K&L Gates, “EU and Luxembourg Update on AI,” January 2026]

High-Risk: The Category That Affects Most Enterprises

High-risk AI systems face the heaviest compliance obligations. Systems qualify as high-risk through two pathways:

Pathway 1 — Safety components in regulated products (Annex I). AI used as a safety component in products already covered by the EU harmonization legislation listed in Annex I — medical devices (MDR), machinery, toys, lifts, civil aviation, motor vehicles, and similar regulated products. These must meet both the AI Act requirements and the relevant sectoral legislation.

Pathway 2 — Annex III standalone high-risk systems. AI systems used in specific domains where the potential impact on fundamental rights is significant. This is the pathway that catches most enterprises.

Annex III High-Risk Categories With Business Examples

Annex III lists eight domains. Here is what they mean for enterprise AI in practice:

Annex III Domain | Enterprise Examples
1. Biometrics | Facial recognition for building access, remote identity verification for KYC onboarding, voice biometrics in call centers
2. Critical infrastructure | AI managing energy grids, water treatment optimization, traffic flow systems, telecom network management
3. Education and vocational training | Automated exam grading, student performance prediction, admissions scoring, adaptive learning platforms
4. Employment and workers management | CV screening tools, interview scoring, performance evaluation algorithms, automated scheduling based on worker profiling
5. Access to essential services | Credit scoring, insurance risk assessment, loan approval automation, social benefit eligibility determination
6. Law enforcement | Polygraph-adjacent tools, evidence analysis, crime prediction (where not outright prohibited), facial recognition under authorized exceptions
7. Migration, asylum, border control | Visa application processing, risk assessment for travelers, document authentication
8. Administration of justice | Legal research AI used in judicial settings, case outcome prediction used by courts

[Source: EU AI Act, Annex III; artificialintelligenceact.eu]

For a typical mid-market enterprise, categories 1, 4, and 5 are the most likely triggers. If you use AI for hiring decisions, credit assessments, insurance pricing, or biometric identification, you almost certainly operate high-risk AI systems.

A critical nuance: Article 6(3) provides an exemption. An AI system in an Annex III category is not treated as high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, for instance because it performs only a narrow procedural task or a task preparatory to a human assessment. Systems that perform profiling of natural persons can never claim the exemption, and the burden of documenting why the exemption applies falls on the provider. [Source: EU AI Act, Article 6(3)]

Limited Risk: Transparency Obligations

Limited-risk AI systems face specific transparency requirements under Article 50, enforceable from August 2, 2026. These include:

  • Chatbots and conversational AI: Must disclose to users that they are interacting with an AI system
  • Deepfakes and synthetic content: Must be labeled as artificially generated or manipulated
  • Emotion recognition and biometric categorization: Users must be informed when such systems are operating

If your enterprise uses customer-facing chatbots, AI-generated marketing content, or synthetic media, these transparency obligations apply regardless of whether the system is classified as high-risk. [Source: EU AI Act, Article 50]

Minimal Risk: Voluntary Codes of Conduct

AI systems that fall outside the above categories — recommendation engines, spam filters, AI-powered search, most internal productivity tools — carry no mandatory AI Act obligations beyond the general AI literacy duty (Article 4). The regulation encourages providers of minimal-risk systems to voluntarily apply codes of conduct aligned with high-risk requirements, but this is not mandatory.

The practical danger: assuming a system is minimal-risk without rigorous classification. That 40% “unclear” figure from the appliedAI study reflects how easy it is to misclassify. A recommendation engine is minimal-risk. A recommendation engine that influences credit offers shown to consumers may be high-risk.

What Enterprises Must Do: Provider vs. Deployer Obligations

The EU AI Act distinguishes between providers (those who develop or place AI systems on the market) and deployers (those who use AI systems in their professional activities). This distinction determines which obligations apply to your organization. Most enterprises are deployers — they buy or license AI systems and use them in business operations. But the line is not always clean.

When You Are a Deployer

If your organization uses a third-party AI system (a vendor’s credit scoring model, an off-the-shelf hiring tool, a cloud-based fraud detection platform), you are a deployer. Deployer obligations under Article 26 include:

  • Use the system according to instructions. Follow the provider’s instructions for use. This sounds simple, but in practice means you cannot repurpose an AI tool outside its intended scope without potentially reclassifying your role as a provider.
  • Human oversight. Assign natural persons with the competence, training, and authority to oversee high-risk AI systems. These cannot be token appointments — the individuals must have genuine ability to intervene in or override the system’s outputs.
  • Input data quality. Ensure that input data is relevant and sufficiently representative for the system’s intended purpose.
  • Log retention. Keep automatically generated logs for at least six months, or longer if required by sector-specific regulation (a minimal retention-policy check is sketched after this list).
  • Fundamental rights impact assessment (FRIA). For deployers that are public bodies, private entities providing public services, or deployers of credit-scoring or life and health insurance pricing systems, conduct an FRIA before deploying high-risk AI systems.
  • Inform affected individuals. When high-risk AI system outputs affect natural persons, inform them that they are subject to AI-assisted decision-making.

[Source: EU AI Act, Article 26; artificialintelligenceact.eu]

When You Are a Provider

If your organization develops AI systems, fine-tunes foundation models for commercial deployment, or puts its own name on a white-labeled AI product, you are a provider. Provider obligations under Article 16 are more extensive:

  • Risk management system (Article 9): Establish and maintain a documented risk management process throughout the AI system’s lifecycle
  • Data governance (Article 10): Ensure training, validation, and testing datasets meet quality criteria
  • Technical documentation (Article 11): Maintain documentation demonstrating compliance
  • Record-keeping and logging (Article 12): Build automatic logging capabilities into the system
  • Transparency and information provision (Article 13): Provide clear instructions for use to deployers
  • Human oversight by design (Article 14): Design the system to enable effective human oversight
  • Accuracy, robustness, and cybersecurity (Article 15): Meet technical performance standards
  • Quality management system (Article 17): Implement a documented QMS covering the entire AI lifecycle
  • Conformity assessment (Article 43): Complete the relevant conformity assessment procedure before placing the system on the market
  • CE marking (Article 48): Affix CE marking to compliant high-risk systems
  • EU database registration (Article 49): Register high-risk AI systems in the EU database
  • Post-market monitoring (Article 72): Operate a system for ongoing monitoring after deployment

[Source: EU AI Act, Articles 9—17, 43, 48, 49, 72]

The Gray Zone: When Deployers Become Providers

Article 25 addresses a scenario many enterprises overlook. You are reclassified from deployer to provider if you:

  • Put your own name or trademark on a high-risk AI system already on the market
  • Make a substantial modification to a high-risk AI system
  • Modify the intended purpose of an AI system such that it becomes high-risk

This matters because enterprises routinely customize vendor AI systems — fine-tuning models on proprietary data, adjusting decision thresholds, or integrating AI outputs into broader automated workflows. Any of these modifications could trigger reclassification, shifting the full weight of provider obligations onto your organization.

Build governance processes that flag when customization crosses the line from deployer use to provider responsibility. TTC’s AI governance framework includes risk classification protocols specifically designed to catch this boundary.

The Enforcement Timeline: What Is Live and What Is Coming

The EU AI Act enforcement follows a phased schedule. Understanding which obligations are already enforceable versus which are approaching helps prioritize compliance work.

Date | What Becomes Enforceable
February 2, 2025 | Prohibited AI practices (Article 5); AI literacy obligations for all providers and deployers (Article 4)
August 2, 2025 | General-purpose AI model obligations (Chapter V); governance structures (AI Office, AI Board, national authorities); Member States designate competent authorities and adopt penalty laws
August 2, 2026 | Annex III high-risk AI system requirements; transparency obligations (Article 50); innovation measures; regulatory sandboxes required (at least one per Member State)
August 2, 2027 | Full scope applies to all categories including Annex I high-risk systems (safety components of regulated products); GPAI models placed on market before August 2025 must comply

[Source: EU AI Act, Articles 111—113; AI Act Service Desk, European Commission, 2025]

The European Commission proposed a “Digital Omnibus” package in November 2025 that could extend certain Annex III deadlines to December 2027. As of March 2026, this proposal has not been finalized. Prudent compliance planning treats August 2, 2026 as the binding date. Organizations that plan around a potential extension and find it does not materialize will face a compressed timeline with no room for error.

A practical note on AI literacy (Article 4): this obligation is already enforceable. Every provider and deployer must ensure that their staff and other persons dealing with AI systems on their behalf have a sufficient level of AI literacy. If your organization has not implemented AI literacy training, you are already non-compliant. See TTC’s AI adoption roadmap for a structured approach to building organizational AI capability.

Where the EU AI Act Meets GDPR

The EU AI Act does not replace GDPR — it layers on top of it. Organizations deploying AI systems that process personal data must comply with both regulations simultaneously. The overlap creates compounding obligations that require integrated compliance programs.

Article 22 GDPR: Automated Decision-Making

GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. The AI Act’s human oversight requirements for high-risk systems (Article 14) reinforce and extend this principle. Where GDPR requires a right to opt out, the AI Act requires the system itself to be designed for meaningful human intervention. [Source: DLA Piper, “The EU AI Act’s Relationship with Data Protection Law,” April 2024]

In practice, this means:

  • AI systems making decisions about credit, employment, insurance, or benefits must allow for human review — both as a GDPR right and as an AI Act design requirement
  • The “meaningful human oversight” standard under the AI Act is arguably more demanding than GDPR’s Article 22, because it requires the system architecture to support intervention, not just offer an appeal process after the fact
  • Organizations must document both their GDPR lawful basis for processing and their AI Act compliance measures in parallel

Data Protection Impact Assessments and Fundamental Rights Impact Assessments

GDPR Article 35 requires Data Protection Impact Assessments (DPIAs) for high-risk data processing. The AI Act introduces Fundamental Rights Impact Assessments (FRIAs) for certain deployers of high-risk AI systems. The AI Act explicitly states that FRIAs should complement DPIAs, with the DPIA conducted first, then expanded to address broader fundamental rights dimensions. [Source: EU AI Act, Article 27; TechPolicy.Press, 2025]

Organizations already running DPIAs for AI-related processing have a head start. The DPIA methodology provides the procedural template for the FRIA — systematic risk evaluation, documentation of mitigation measures, and ongoing monitoring. Do not build these as separate processes. Integrate them into a single assessment workflow, saving resources and avoiding contradictory conclusions.

GDPR enforcement on AI-related processing is already active. Cumulative GDPR fines have reached EUR 7.1 billion since May 2018, with over 2,560 individual fines recorded. [Source: GDPR Enforcement Tracker, enforcementtracker.com, March 2025] Clearview AI’s EUR 30.5 million fine from the Dutch DPA for scraping facial images without consent demonstrates that AI-specific GDPR enforcement is not hypothetical. [Source: Dutch DPA, Clearview AI Decision, 2024] The EDPB has designated transparency as its 2026 coordinated enforcement theme, with parallel investigations across national DPAs into how organizations communicate data processing practices — directly relevant to AI transparency obligations. [Source: EDPB, 2026 Enforcement Priorities]

Sector-Specific Overlays: Financial Services, Healthcare, General Enterprise

The EU AI Act does not exist in isolation. Sector-specific regulations create additional layers that compound compliance requirements. Three sectors deserve specific attention.

Financial Services: AI Act + DORA + MiFID II

Financial institutions face the densest regulatory stack. The Digital Operational Resilience Act (DORA), enforceable since January 2025, governs ICT risk management, incident reporting, and digital resilience testing. Its requirements overlap with and complement the AI Act’s risk management obligations. [Source: Pinsent Masons, “Financial Services Compliance with the EU AI Act and DORA,” 2025]

Practical implications for financial services:

  • Credit scoring AI is explicitly high-risk under Annex III, category 5(b). Conformity assessments, technical documentation, and ongoing monitoring are mandatory by August 2026.
  • DORA’s ICT risk management framework and the AI Act’s risk management system (Article 9) share underlying concepts — documented risk identification, mitigation measures, continuous monitoring. Build one integrated risk management system, not two parallel ones.
  • Anti-money laundering AI, fraud detection systems, and algorithmic trading tools each require separate classification analysis under the AI Act, cross-referenced against MiFID II, PSD2, and national supervisory authority guidance.
  • National financial regulators (e.g., Poland’s KNF, Germany’s BaFin) are expected to issue sector-specific AI guidance that supplements the AI Act. Monitor these developments.

For an AI readiness assessment tailored to the financial services regulatory landscape, TTC’s eight-dimension scoring model includes a dedicated governance and regulatory dimension.

Healthcare: AI Act + MDR/IVDR

AI in medical devices triggers dual regulation. The Medical Devices Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR) already impose stringent requirements on software classified as a medical device. AI systems that qualify as medical devices are also high-risk under the AI Act’s Annex I pathway. [Source: PMC/NIH, “Navigating the EU AI Act: Implications for Regulated Digital Medical Products,” 2024]

Key points:

  • Medical device AI systems have an extended transition period — compliance with AI Act requirements is required by August 2, 2027 (one year later than Annex III systems). [Source: EU AI Act, Article 113]
  • The conformity assessment under MDR (involving Notified Bodies) and the conformity assessment under the AI Act must be coordinated. Where possible, conduct them as a single integrated process.
  • Clinical decision support tools, diagnostic AI, and patient monitoring systems all require classification under both the MDR risk framework and the AI Act risk classification. These may yield different risk levels — the higher classification applies.

General Enterprise: The Most Common AI Systems

For enterprises outside financial services and healthcare, the most common high-risk triggers are:

  • HR and recruitment AI (Annex III, category 4): CV screening, interview analysis, performance prediction, automated scheduling based on worker profiling. If your HR technology stack includes AI features, classify each one.
  • Access to essential services (Annex III, category 5): Any AI system determining eligibility for or terms of contracts, insurance, utilities, or financial products offered to consumers.
  • Biometric systems (Annex III, category 1): Facial recognition for access control, voice biometrics in customer service, behavioral biometrics for fraud prevention.

Enterprise AI that falls outside high-risk categories — internal productivity tools, code assistants, content generation, data analysis dashboards — still faces the AI literacy obligation (Article 4, enforceable now) and transparency requirements for chatbots and synthetic content (Article 50, enforceable August 2026).

Build your AI governance framework to cover the full spectrum, from minimal-risk tools that need only policy coverage to high-risk systems that require full conformity assessment.

Conformity Assessment: What It Actually Involves

For high-risk AI systems, conformity assessment is the regulatory gateway to market access. No conformity assessment means no CE marking, no EU database registration, and no legal deployment after August 2026.

Two Assessment Routes

Internal conformity assessment (Annex VI). Most Annex III high-risk AI systems use this route. The provider conducts a self-assessment, verifying compliance with all Chapter III requirements and documenting the evidence. This is not a rubber-stamp exercise — it requires comprehensive technical documentation, a functioning quality management system, and verifiable test results.

Third-party conformity assessment (Annex VII). Required for certain biometric systems and any system where sector-specific legislation mandates third-party involvement. A Notified Body conducts the assessment and issues a certificate.

What the Assessment Covers

A conformity assessment evaluates compliance with the requirements of Articles 8—15:

  1. Risk management (Article 9): Is there a documented, continuous risk management process? Are residual risks acceptable?
  2. Data governance (Article 10): Do training and testing datasets meet quality, relevance, and representativeness criteria?
  3. Technical documentation (Article 11): Does documentation enable authorities to assess compliance? Is it kept up to date?
  4. Logging (Article 12): Does the system automatically log events to the degree needed for traceability?
  5. Transparency (Article 13): Are instructions for use complete and accessible to deployers?
  6. Human oversight (Article 14): Is the system designed to be effectively overseen by natural persons?
  7. Accuracy and robustness (Article 15): Does the system meet defined accuracy levels? Is it resilient to adversarial inputs and errors?

Organizations report that conformity assessment takes 6—12 months from initiation to completion, including documentation preparation, testing, and remediation of gaps. [Source: SecurePrivacy.ai, “EU AI Act 2026 Compliance Guide,” 2026] Third-party audits cost $200K—800K annually for compliance verification. [Source: LegalNodes, “EU AI Act 2026 Updates,” 2026]

A Practical Compliance Roadmap: What to Do First, Second, Third

Compliance programs that try to do everything simultaneously fail. This roadmap sequences activities by dependency and deadline urgency.

Phase 1: Inventory and Classify (Months 1—3)

Start here, regardless of maturity level.

  1. Build an AI system inventory. Catalog every AI system in production, development, and procurement. Include vendor AI embedded in SaaS tools — many organizations discover AI systems they did not know they were using. Over half of organizations still lack this inventory (a minimal record structure is sketched after this list). [Source: Complianceandrisks.com, 2026]
  2. Classify each system by risk tier. Apply the Article 6 classification rules and Annex III categories. For the 40% of systems that fall into gray zones, document your reasoning and seek legal counsel.
  3. Map provider vs. deployer status. For each high-risk system, determine whether your organization is the provider, deployer, or has triggered reclassification through customization (Article 25).
  4. Verify prohibited practices compliance. Confirm that no AI system in your portfolio falls under Article 5 prohibitions. This should already be done — the ban has been enforceable since February 2025.

Use TTC’s AI maturity model to benchmark your organization’s current governance capabilities against what the AI Act requires.

Phase 2: Governance and Documentation (Months 3—8)

  1. Establish governance structures. If you do not have an AI governance framework, build one. The AI Act’s requirements for risk management systems, quality management systems, and human oversight presuppose organizational structures that assign accountability. A governance framework with defined roles, decision rights, and escalation paths is not optional — it is the infrastructure that makes compliance possible.
  2. Develop technical documentation. For each high-risk system, create or verify documentation covering design specifications, training data characteristics, testing results, performance metrics, and known limitations.
  3. Implement human oversight mechanisms. Identify and train the natural persons who will oversee high-risk systems. Document their competence, authority, and access to intervention capabilities.
  4. Integrate DPIA and FRIA processes. Build a single assessment workflow that satisfies both GDPR and AI Act impact assessment requirements.
  5. Establish logging and record-keeping. Verify that high-risk systems generate and retain the logs required by Article 12, with retention periods meeting both the AI Act six-month minimum and any sector-specific requirements.

Phase 3: Assessment and Registration (Months 8—12)

  1. Conduct conformity assessments. For systems requiring internal assessment, complete the self-assessment against all Chapter III requirements. For systems requiring third-party assessment, engage a Notified Body.
  2. Affix CE marking and register. Upon successful conformity assessment, affix CE marking and register high-risk systems in the EU database.
  3. Activate post-market monitoring. Establish ongoing monitoring systems that will detect performance degradation, emerging risks, and compliance drift after deployment.

Phase 4: Ongoing Compliance (Continuous)

  1. Monitor regulatory developments. The AI Act’s implementing acts, delegated acts, and harmonized standards are still being finalized. National competent authorities will issue guidance. Regulatory sandboxes will produce insights. Stay current.
  2. Conduct periodic reviews. Re-classify systems when their use changes. Update documentation when systems are modified. Re-run conformity assessments when substantial modifications occur.
  3. Train continuously. The AI literacy obligation (Article 4) is not a one-time training event. As AI systems evolve and new ones are deployed, training must keep pace.

For a structured approach to organizational change management during compliance programs, see TTC’s change management framework.

How Governance Frameworks Map to EU AI Act Requirements

Organizations that already have an AI governance framework are not starting from zero. Many governance capabilities map directly to AI Act requirements. Understanding the mapping helps quantify existing coverage and identify gaps.

AI Act Requirement | Governance Framework Element | Gap Risk
Risk management system (Art. 9) | Risk classification and assessment processes | Often exists but not documented to Art. 9 specificity
Data governance (Art. 10) | Data quality standards, data catalog | Governance frameworks rarely cover training data requirements
Technical documentation (Art. 11) | Model documentation, decision logs | Existing docs may lack regulatory detail
Human oversight (Art. 14) | Ethics Board review, oversight roles | May be advisory rather than operational as Art. 14 requires
Quality management system (Art. 17) | Process standards, CoE operating model | Governance frameworks may not formalize as a “QMS”
Post-market monitoring (Art. 72) | Model monitoring, drift detection | Often technical, not structured for regulatory reporting
Board-level governance | Board AI governance maturity | Frequently absent — boards may not have AI oversight structures

The most common gap: governance frameworks built for operational effectiveness do not automatically satisfy the documentation and procedural specificity the AI Act demands. A working risk classification process is not the same as a documented risk management system that an authority can audit. The substance may exist, but the form needs work.

TTC’s governance framework was designed with EU AI Act alignment built in — its risk classification tiers mirror the AI Act’s risk categories, its documentation requirements anticipate conformity assessment needs, and its role definitions map to the AI Act’s accountability expectations. For organizations building governance from scratch, starting with an AI Act-aligned framework avoids the cost of retrofitting later.

Common Compliance Mistakes

Five patterns recur across enterprise AI Act compliance programs. Each one creates regulatory exposure, wastes resources, or both.

1. Treating compliance as a legal project. The AI Act’s requirements are deeply technical — risk management systems, data governance, conformity assessments, logging capabilities. Legal teams can interpret the regulation but cannot implement the technical and organizational measures it requires. Compliance programs need cross-functional teams spanning legal, technology, data science, operations, and business leadership.

2. Ignoring the deployer obligations. Many enterprises assume that because they buy rather than build AI, compliance is the vendor’s problem. It is not. Deployers have independent obligations under Article 26 — human oversight, input data quality, log retention, and informing affected individuals. Vendor compliance does not equal your compliance.

3. Classifying by hope rather than analysis. Organizations wanting to avoid high-risk obligations sometimes apply favorable classifications without rigorous analysis. The Article 6(3) exemption exists, but invoking it requires documented evidence that the system does not pose significant risk. Regulators will scrutinize optimistic classifications, and the burden of proof is on the provider.

4. Building parallel compliance silos. Organizations that run separate programs for GDPR, AI Act, DORA, and sector-specific regulation end up with duplicated effort, inconsistent risk assessments, and compliance teams that do not communicate. Integrate compliance programs around shared infrastructure: one risk management system, one documentation repository, one governance structure. TTC’s AI ROI calculator can help quantify the efficiency gains from integrated versus siloed compliance approaches.

5. Waiting for final standards. The European Commission is still developing harmonized standards and implementing guidance. Some organizations use this as justification to delay compliance work. This is a mistake. The regulation’s core requirements are clear in the text. Harmonized standards will provide detailed technical specifications, but the obligations they detail are already defined. Start with the regulation, refine as standards emerge.

Penalties: What Non-Compliance Actually Costs

The EU AI Act establishes a three-tier penalty structure that scales with violation severity:

Violation Type | Maximum Penalty
Prohibited AI practices (Article 5) | EUR 35 million or 7% of global annual turnover
High-risk system requirement violations | EUR 15 million or 3% of global annual turnover
Incorrect information to authorities | EUR 7.5 million or 1% of global annual turnover

[Source: EU AI Act, Article 99]

For SMEs and startups, reduced caps apply. But for mid-market and large enterprises, these penalties exceed GDPR fine levels for the most severe violations. The 7% turnover threshold for prohibited practices is the highest percentage-based fine in EU digital regulation.

Enforcement is currently fragmented. The European AI Office’s enforcement powers took effect in August 2025, and national competent authorities are being designated across Member States. Finland became the first country with fully operational AI Act enforcement in January 2026. [Source: K&L Gates, January 2026] The enforcement infrastructure is building — the question is not whether enforcement will occur, but when and where it begins.

Beyond regulatory penalties, non-compliance creates secondary costs: loss of market access (you cannot legally deploy non-compliant high-risk AI in the EU), contract liability (B2B customers will increasingly require AI Act compliance as a procurement condition), and reputational damage in a market where AI trust is becoming a selection criterion.

Frequently Asked Questions

Does the EU AI Act apply to companies outside the European Union?

Yes. The AI Act applies to providers placing AI systems on the EU market and to deployers located within the EU, regardless of where the provider is established. It also applies to providers and deployers located outside the EU where the output of their AI system is used within the EU. This extraterritorial scope mirrors the GDPR’s approach. If your AI system affects people in the EU, the regulation applies. [Source: EU AI Act, Article 2]

What qualifies as an “AI system” under the regulation?

The AI Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” This is a broad definition that captures most modern machine learning, deep learning, and generative AI systems. Simple rule-based automation typically falls outside the definition. [Source: EU AI Act, Article 3(1)]

We only use vendor AI tools. Do we still need to comply?

Yes. Deployers have independent obligations under Article 26, including human oversight, input data quality, log retention, and informing individuals affected by AI decisions. Vendor compliance addresses provider obligations — it does not satisfy your deployer obligations. You also need to verify that your vendors are meeting their provider obligations, as deploying a non-compliant high-risk system exposes you to liability.

How do we know if our AI system is “high-risk”?

Apply the two-pathway classification from Article 6. First, check if the AI is a safety component of a product covered by the EU harmonization legislation listed in Annex I. Second, check if the AI system falls within any of the eight Annex III categories (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice). If either pathway applies, the system is presumed high-risk unless you can demonstrate the Article 6(3) exemption. When in doubt, classify conservatively — the cost of over-compliance is lower than the cost of under-classification.

Can we delay compliance because harmonized standards are not finalized?

No. The regulation’s requirements are binding regardless of whether harmonized standards have been published. Harmonized standards provide a “presumption of conformity” — meaning that following them is a safe harbor for demonstrating compliance. But the absence of harmonized standards does not delay the obligations. You can comply by demonstrating adherence to the regulation’s requirements through other means, which may require more effort to document.

What is the difference between the AI Act conformity assessment and ISO 42001?

ISO/IEC 42001 (AI Management System) is a voluntary international standard for AI governance. Achieving ISO 42001 certification demonstrates good AI management practices but does not constitute conformity assessment under the AI Act. However, ISO 42001 implementation covers significant ground toward AI Act compliance — particularly around risk management, documentation, and quality management systems. Organizations with ISO 42001 certification will have less work to reach AI Act conformity. The European Commission may recognize ISO 42001 (or parts of it) as a harmonized standard, which would create a direct presumption of conformity for the covered requirements.

What should boards prioritize for AI Act oversight?

Board-level AI governance is not optional — the AI Act’s obligations ultimately flow to senior leadership accountability. Boards should ensure the organization has a complete AI system inventory, a risk classification methodology, a governance structure with defined accountability, and a funded compliance program with a clear timeline. TTC’s board AI governance maturity model provides a five-stage assessment framework for evaluating whether the board is providing adequate AI oversight. Most mid-market boards need to reach at least Stage 3 (compliance-oriented) to meet basic AI Act requirements, with Stage 4 (strategic) as the target for organizations with significant AI exposure.

What Comes Next

EU AI Act compliance is not a project with a finish line. The regulation establishes an ongoing compliance regime — continuous monitoring, periodic re-assessment, documentation maintenance, and adaptation to evolving guidance and standards. Organizations that treat August 2026 as the end point will fall out of compliance within months.

The organizations that handle this best share two characteristics: they have governance frameworks that were designed for regulatory compliance from the start, and they treat compliance as an integrated business function rather than a standalone legal exercise.

If your organization needs to build AI governance infrastructure that satisfies the EU AI Act while enabling — rather than blocking — AI innovation, TTC’s AI Governance and Risk Framework (EUR 10—15K) provides the foundational structures: risk classification aligned to the AI Act’s risk tiers, documentation templates mapped to conformity assessment requirements, and role definitions that satisfy accountability expectations. For organizations undertaking broader AI transformation that includes compliance build-out as one workstream, the AI Transformation Sprint (EUR 50—80K, 4—6 weeks) addresses governance alongside strategy, operations, and change management in a single engagement.

Start with a governance assessment to identify your compliance gaps and prioritize the work that matters most before August 2026.