The Thinking Company

What Is the EU AI Act?

The EU AI Act (Regulation 2024/1689) is the European Union’s comprehensive legal framework for artificial intelligence — the first of its kind globally. It classifies AI systems into four risk tiers (unacceptable, high, limited, and minimal) and assigns compliance obligations proportional to each tier’s potential for harm.

High-risk AI systems — including credit scoring, hiring algorithms, and medical diagnostics — must meet strict requirements for risk management, data governance, transparency, and human oversight, with penalties reaching EUR 35 million or 7% of global annual turnover.

The Act entered into force in August 2024, with a phased implementation timeline extending through 2027. According to a KPMG survey, only 29% of European enterprises had begun formal compliance preparations by mid-2025, despite the approaching deadlines. [Source: KPMG EU AI Act Readiness Survey, 2025] For organizations deploying AI in or serving the European market, understanding the EU AI Act’s compliance requirements is no longer optional — it is a regulatory necessity with material financial exposure.

Why the EU AI Act Matters for Business Leaders

The EU AI Act affects any organization that develops, deploys, or imports AI systems within the European Union — regardless of where the organization is headquartered. This extraterritorial scope mirrors GDPR’s approach and means that US, UK, and Asian companies selling AI-powered products in Europe must comply.

The financial exposure is substantial. Fines for prohibited AI practices reach EUR 35 million or 7% of global turnover, whichever is higher. For high-risk AI non-compliance, penalties cap at EUR 15 million or 3% of turnover. Even limited-risk transparency violations carry fines of EUR 7.5 million or 1.5% of turnover. PwC estimates that compliance costs for organizations deploying high-risk AI will average EUR 200,000-400,000 per AI system, depending on complexity. [Source: PwC, 2025]
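The “whichever is higher” rule translates into a simple calculation. A minimal sketch in Python, using the penalty caps cited above and a hypothetical turnover figure:

```python
def max_penalty(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """EU AI Act fines take the higher of a fixed cap and a share of global turnover."""
    return max(fixed_cap_eur, turnover_eur * pct)

# Prohibited-practice tier: EUR 35 million or 7% of global turnover.
# Hypothetical company with EUR 2 billion global annual turnover:
exposure = max_penalty(2_000_000_000, 35_000_000, 0.07)
print(f"EUR {exposure:,.0f}")  # 7% of 2B is EUR 140,000,000, above the EUR 35M floor
```

For smaller firms the fixed cap dominates; for large ones the turnover percentage does, which is why exposure scales with company size.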

Beyond penalties, the Act reshapes competitive dynamics. Organizations that build compliance into their AI development process from the start — embedding risk assessment, documentation, and monitoring as standard practice — gain a structural advantage over those that retrofit compliance after deployment. This proactive approach aligns with Stage 4+ on the AI maturity model, where governance becomes a competitive enabler rather than a cost center.

The Act also establishes the global benchmark. Brazil, Canada, and several ASEAN nations have referenced the EU AI Act when drafting their own AI regulations. Gartner predicts that by 2027, 60% of countries with AI regulation will have adopted frameworks substantially influenced by the EU AI Act’s risk-based approach. [Source: Gartner, 2025]

How the EU AI Act Works: Key Components

Risk-Based Classification System

The Act’s foundation is a four-tier risk classification:

  • Unacceptable risk (banned outright): social scoring by governments, real-time biometric surveillance in public spaces (with narrow exceptions), and AI that manipulates human behavior to cause harm.

  • High risk (the Act’s most demanding requirements): AI used in critical infrastructure, education, employment, credit scoring, law enforcement, and healthcare.

  • Limited risk (transparency obligations): systems like chatbots, where users must know they are interacting with AI.

  • Minimal risk (no restrictions): spam filters, AI in video games.

Organizations must classify every AI system they deploy and document the rationale for that classification.
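The four tiers can be sketched as a simple classification register. The system names and rationales below are hypothetical examples, but the structure mirrors the Act’s requirement to document a rationale for each classification:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict requirements plus conformity assessment"
    LIMITED = "transparency obligations"
    MINIMAL = "no restrictions"

# Illustrative register: system name -> (tier, documented rationale).
register = {
    "resume-screening-model": (RiskTier.HIGH, "AI used in employment decisions"),
    "support-chatbot": (RiskTier.LIMITED, "users must know they interact with AI"),
    "spam-filter": (RiskTier.MINIMAL, "no meaningful risk to rights or safety"),
}

for name, (tier, rationale) in register.items():
    print(f"{name}: {tier.name} ({rationale})")
```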

Mandatory Requirements for High-Risk AI

High-risk AI systems must satisfy six categories of requirements: (1) a risk management system maintained throughout the AI lifecycle, (2) data governance ensuring training data quality and representativeness, (3) technical documentation sufficient for conformity assessment, (4) record-keeping and logging of system operations, (5) transparency allowing users to interpret outputs, and (6) human oversight enabling operators to intervene or override. The European Commission estimates that 15% of enterprise AI systems currently deployed in Europe qualify as high-risk. [Source: European Commission Impact Assessment, 2024]

General-Purpose AI Model Obligations

The Act includes specific rules for general-purpose AI models (like GPT-4, Claude, and Gemini). Providers must publish training methodology summaries, comply with EU copyright law, and maintain technical documentation. Models classified as presenting “systemic risk” — based on compute thresholds (10^25 FLOPs) — face additional obligations including adversarial testing, incident reporting, and cybersecurity requirements.
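Whether a model crosses the 10^25 FLOPs threshold can be estimated with the common rule of thumb of roughly 6 FLOPs per parameter per training token — an industry approximation, not a formula from the Act. The model size and token count below are hypothetical:

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # training-compute threshold named in the Act

def estimated_training_flops(params: float, tokens: float) -> float:
    # Rule-of-thumb estimate: ~6 FLOPs per parameter per training token.
    return 6 * params * tokens

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e}", flops >= SYSTEMIC_RISK_THRESHOLD)  # 6.30e+24 False
```

Under this estimate the hypothetical model sits just below the threshold, illustrating that only the largest frontier training runs trigger the systemic-risk obligations.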

Conformity Assessment and CE Marking

High-risk AI systems must undergo conformity assessment before market placement — either self-assessment or third-party audit, depending on the use case. Systems that pass receive CE marking, similar to physical product safety certification. Biometric identification and critical infrastructure AI require third-party assessment; most other high-risk categories allow self-assessment against harmonized standards.

Implementation Timeline

The Act follows a phased rollout. Prohibitions on unacceptable-risk AI took effect in February 2025. General-purpose AI model rules and governance structures apply from August 2025. Full high-risk AI compliance obligations become enforceable in August 2026, with certain narrow exceptions extending to August 2027.

EU AI Act in Practice: Real-World Applications

  • Booking.com (Travel/Technology): Booking.com initiated an EU AI Act compliance program in early 2025, classifying over 200 AI systems across its platform. The company identified 18 high-risk systems — primarily in pricing algorithms and customer profiling — and allocated EUR 4 million to bring them into compliance, including implementing human oversight mechanisms and bias auditing for recommendation systems.

  • Deutsche Bank (Financial Services): Deutsche Bank created a dedicated AI Regulation Compliance unit in 2025 with 25 staff members focused exclusively on EU AI Act preparedness. The bank classified its credit scoring and anti-money laundering AI systems as high-risk and began implementing conformity documentation, model transparency reports, and enhanced data governance protocols 18 months ahead of the August 2026 enforcement deadline.

  • Philips (Healthcare Technology): Philips mapped its medical AI portfolio against the EU AI Act requirements alongside existing Medical Device Regulation (MDR) compliance. The company found that 80% of MDR documentation requirements overlapped with EU AI Act high-risk requirements, reducing the incremental compliance burden. Philips published this mapping methodology as a white paper, positioning itself as a compliance leader in health AI.

How to Get Started with EU AI Act Compliance

  1. Inventory and classify your AI systems: Create a comprehensive register of every AI system your organization develops, deploys, or procures. Classify each system against the Act’s four risk tiers. Pay particular attention to HR, financial, and customer-facing AI — these most commonly fall into the high-risk category.

  2. Conduct a gap assessment: For each high-risk system, evaluate current documentation, risk management, data governance, transparency, and human oversight against the Act’s requirements. Most organizations discover significant gaps in technical documentation and logging.

  3. Build compliance into your AI development lifecycle: Embed EU AI Act requirements into your AI strategy and development process from the design phase rather than retrofitting after deployment. This includes ethical review, AI ethics assessment, and conformity documentation as standard project deliverables.

  4. Address shadow AI exposure: Unmanaged shadow AI usage may include high-risk applications — employees using AI for hiring decisions or customer profiling without organizational awareness. Establish acceptable use policies and governance frameworks that account for both sanctioned and unsanctioned AI use.
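Steps 1 and 2 above can be sketched as a minimal inventory-and-gap record. The field names and example systems are illustrative assumptions, not terminology from the Act:

```python
from dataclasses import dataclass, field

# The six high-risk requirement areas listed earlier in this article.
REQUIREMENTS = [
    "risk management", "data governance", "technical documentation",
    "record-keeping", "transparency", "human oversight",
]

@dataclass
class AISystemRecord:
    name: str
    risk_tier: str   # unacceptable / high / limited / minimal
    rationale: str   # documented basis for the classification
    gaps: list = field(default_factory=list)  # unmet requirement areas

    def gap_report(self) -> str:
        if self.risk_tier != "high":
            return f"{self.name}: no high-risk gap assessment required"
        return f"{self.name}: open gaps -> {', '.join(self.gaps) or 'none'}"

inventory = [
    AISystemRecord("credit-scoring-v2", "high", "creditworthiness assessment",
                   gaps=["technical documentation", "record-keeping"]),
    AISystemRecord("faq-chatbot", "limited", "user-facing conversational AI"),
]

for record in inventory:
    print(record.gap_report())
```

A register like this gives each system a documented tier and rationale (step 1) and a running list of unmet high-risk requirements (step 2), which then feeds the compliance roadmap.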

At The Thinking Company, we help organizations navigate EU AI Act compliance as part of our AI Governance engagements (EUR 10-15K). We conduct AI system classification, gap assessments, and build compliance roadmaps aligned with the Act’s phased enforcement timeline.


Frequently Asked Questions

Does the EU AI Act apply to companies outside Europe?

Yes. The Act applies to any organization that places AI systems on the EU market or whose AI outputs are used within the EU — regardless of where the company is headquartered. A US-based SaaS company whose AI-powered product serves European customers must comply. This extraterritorial reach mirrors GDPR and means that global companies must factor EU AI Act compliance into their AI deployment strategy for any product that touches the European market.

What is the difference between the EU AI Act and GDPR?

GDPR regulates personal data processing; the EU AI Act regulates AI systems broadly, including those that do not process personal data. They overlap when AI systems handle personal data (most enterprise AI does), in which case both regulations apply simultaneously. The EU AI Act adds requirements beyond GDPR: risk classification, conformity assessment, transparency about AI interaction, and technical documentation standards. Organizations need compliance programs that address both regulations in an integrated way.

When do EU AI Act obligations actually start?

The timeline is phased. Prohibitions on unacceptable-risk AI (social scoring, manipulative AI) took effect in February 2025. General-purpose AI model rules apply from August 2025. The full set of high-risk AI obligations — risk management, data governance, transparency, human oversight, and conformity assessment — becomes enforceable in August 2026. Some specific provisions for AI embedded in regulated products (medical devices, vehicles) have an extended deadline of August 2027.


Last updated 2026-03-11. For a detailed compliance roadmap and implementation guidance, see our EU AI Act Compliance pillar page.