
EU AI Act Compliance Checklist for 2026: What Your Company Must Do Now

A comprehensive, actionable checklist covering every EU AI Act requirement your company must meet before enforcement deadlines hit. Covers risk classification, documentation, transparency obligations, and conformity assessments.

AI Comply HQ Team

Why You Need This Checklist Now

The EU AI Act is no longer a proposal; it's law. With enforcement deadlines staggered from February 2, 2025 through August 2, 2027, companies developing or deploying AI systems in the EU market must act now to avoid penalties of up to €35 million or 7% of global annual turnover, whichever is higher.

This checklist distills the regulation's 180+ pages into concrete, actionable steps organized by risk tier. Whether you're a startup deploying a single chatbot or an enterprise running dozens of AI-powered systems, this guide tells you exactly what to do.

Step 1: Classify Your AI Systems by Risk Tier

The EU AI Act uses a risk-based approach with four tiers. Your compliance obligations depend entirely on which tier your AI system falls into.

Unacceptable Risk (Prohibited)

These AI practices have been banned outright since February 2, 2025:

  • Social scoring by public authorities
  • Real-time remote biometric identification in public spaces (with narrow exceptions for law enforcement)
  • Emotion recognition in workplaces and educational institutions
  • AI systems that exploit vulnerabilities of specific groups (age, disability)
  • Untargeted scraping of facial images from the internet or CCTV for facial recognition databases
  • Predictive policing based solely on profiling

Action item: Audit every AI system you operate. If any fall into this category, decommission immediately.

High Risk

These systems require the heaviest compliance burden:

  • AI used in recruitment, employee management, or access to employment
  • AI in education for determining access to institutions or evaluating students
  • AI in critical infrastructure (energy, transport, water, digital infrastructure)
  • AI in law enforcement, border control, or administration of justice
  • AI for creditworthiness assessment or insurance pricing
  • AI used as safety components in regulated products (medical devices, vehicles, machinery)

Limited Risk

These systems have transparency obligations only:

  • Chatbots and conversational AI (must disclose AI interaction)
  • Emotion recognition systems (where not prohibited)
  • Deep fake generators (must label generated content)
  • AI systems generating synthetic content

Minimal Risk

Free to operate with no specific obligations, but voluntary codes of conduct are encouraged:

  • AI-powered video games
  • Spam filters
  • Inventory management systems
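The four tiers above can be thought of as a "strictest match wins" triage. The sketch below is purely illustrative: the category keys are hypothetical labels, not terms from the Act, and a real classification must follow the Act's Annexes with legal review, not a lookup table.

```python
# Illustrative first-pass triage of an AI system by EU AI Act risk tier.
# Category labels are hypothetical shorthand for the lists in this article,
# not legal terms; this is a sketch, not a compliance determination.

PROHIBITED = {
    "social_scoring", "realtime_public_biometric_id",
    "workplace_emotion_recognition", "vulnerability_exploitation",
    "untargeted_face_scraping", "profiling_only_predictive_policing",
}
HIGH_RISK = {
    "recruitment", "education_access", "critical_infrastructure",
    "law_enforcement", "credit_scoring", "product_safety_component",
}
LIMITED_RISK = {
    "chatbot", "emotion_recognition", "deepfake_generator",
    "synthetic_content",
}

def classify(use_cases: set[str]) -> str:
    """Return the strictest applicable tier for a system's use cases."""
    if use_cases & PROHIBITED:
        return "unacceptable"   # decommission immediately
    if use_cases & HIGH_RISK:
        return "high"           # heaviest compliance burden
    if use_cases & LIMITED_RISK:
        return "limited"        # Article 50 transparency duties
    return "minimal"            # voluntary codes of conduct only

print(classify({"chatbot", "recruitment"}))  # high: strictest tier wins
```

Note that a system with multiple use cases inherits the strictest tier; a recruitment tool with a chat interface is high risk, not limited risk.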

Step 2: Complete the Documentation Requirements

For high-risk AI systems, you must create and maintain:

| Document | What It Contains | Deadline |
|----------|------------------|----------|
| Technical Documentation | System architecture, training data, performance metrics, testing methodology | Before placing on market |
| Risk Management System | Identified risks, mitigation measures, residual risk assessment | Ongoing |
| Data Governance Records | Training data sources, data preparation, bias detection, data quality metrics | Before placing on market |
| Quality Management System | Development processes, QMS procedures, change management, staff competencies | Before placing on market |
| Conformity Assessment | Self-assessment or third-party audit depending on category | Before placing on market |
| EU Declaration of Conformity | Formal declaration that the system meets all applicable requirements | Before placing on market |
| Instructions for Use | Intended purpose, limitations, human oversight requirements, performance metrics | Before placing on market |

Step 3: Implement Transparency Requirements

Article 50 imposes transparency obligations on AI systems that interact with people, generate synthetic content, or perform emotion recognition:

  • AI Interaction Disclosure: Users must be informed when they are interacting with an AI system (Article 50(1))
  • Synthetic Content Labeling: AI-generated text, audio, image, or video content must carry a machine-readable marking identifying it as AI-generated
  • Deep Fake Disclosure: Content depicting real people or events must be disclosed as artificially generated or manipulated
  • Emotion Recognition Notice: If your system detects emotions, inform the exposed person

Implementation Checklist

  • [ ] Add visible "AI-powered" or "You are interacting with an AI" disclosure on all user-facing AI interfaces
  • [ ] Implement C2PA or similar metadata standards for AI-generated content
  • [ ] Add disclosure banners on any content that uses deepfake technology
  • [ ] Create privacy notices for any emotion recognition features
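The two most common transparency hooks, interaction disclosure and machine-readable content labeling, can be sketched as below. The disclosure text and metadata field names are assumptions for illustration, not wording prescribed by the regulation; for images and video, a signed provenance standard such as C2PA should replace the ad-hoc dict shown here.

```python
# Sketch of Article 50-style transparency hooks. Disclosure wording and
# metadata field names are illustrative assumptions, not regulatory text.

import json
from datetime import datetime, timezone

DISCLOSURE = "You are interacting with an AI system."

def wrap_chat_response(text: str, first_turn: bool) -> str:
    """Prepend the AI-interaction disclosure on the first turn."""
    return f"{DISCLOSURE}\n\n{text}" if first_turn else text

def label_generated_content(payload: bytes, model_id: str) -> dict:
    """Attach machine-readable 'AI-generated' provenance metadata."""
    return {
        "content_length": len(payload),
        "ai_generated": True,               # the machine-readable flag
        "generator": model_id,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

print(wrap_chat_response("Hello! How can I help?", first_turn=True))
print(json.dumps(label_generated_content(b"...", "example-model"), indent=2))
```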

Step 4: Establish Human Oversight Mechanisms

High-risk AI systems must include effective human oversight:

  • Human-in-the-loop: A human can intervene in every decision cycle
  • Human-on-the-loop: A human monitors the system and can intervene when needed
  • Human-in-command: A human can override or reverse AI decisions

Your implementation must ensure:

  1. The human overseer understands the AI system's capabilities and limitations
  2. They can correctly interpret the system's output
  3. They can decide not to use the system or disregard its output
  4. They can stop the system or intervene in its operation
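The four guarantees above translate into a review gate where a human sees the AI output before it takes effect and can accept, override, or halt the system. This is a minimal sketch; the `Decision` type and action names are hypothetical.

```python
# Sketch of a human-in-the-loop gate for a high-risk AI decision.
# The Decision type and action names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Decision:
    recommendation: str
    confidence: float

def human_review(decision: Decision, reviewer_action: str) -> str:
    """The reviewer may accept, disregard/override, or stop the system."""
    if reviewer_action == "accept":
        return decision.recommendation
    if reviewer_action == "override":
        return "escalated_to_human_decision"   # AI output disregarded
    if reviewer_action == "stop":
        raise SystemExit("AI system halted by human overseer")
    raise ValueError(f"unknown action: {reviewer_action}")

d = Decision(recommendation="reject_application", confidence=0.91)
print(human_review(d, "override"))  # escalated_to_human_decision
```

The key design point is that the AI output is a recommendation, never an automatically executed action, so the overseer can always exercise options 3 and 4.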

Step 5: Set Up Monitoring and Incident Reporting

Post-market monitoring is mandatory for high-risk systems:

  • Continuous monitoring of AI system performance in production
  • Serious incident reporting to national authorities within defined timeframes
  • Performance degradation tracking: you must detect when your system's accuracy drops
  • Bias monitoring: ongoing checks for discriminatory outcomes
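Performance degradation tracking can be as simple as comparing rolling production accuracy against the baseline documented in your technical documentation. In this sketch the drop threshold and window size are assumptions, not values from the Act; choose them to match your documented performance claims.

```python
# Illustrative post-market monitoring check: flag when rolling production
# accuracy falls below the documented baseline. Threshold and window size
# are assumptions, not figures from the EU AI Act.

from collections import deque

class AccuracyMonitor:
    def __init__(self, baseline: float, max_drop: float = 0.05,
                 window: int = 1000):
        self.baseline = baseline
        self.max_drop = max_drop
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def degraded(self) -> bool:
        """True when rolling accuracy drops more than max_drop below baseline."""
        if not self.outcomes:
            return False
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.max_drop

mon = AccuracyMonitor(baseline=0.92)
for ok in [True] * 80 + [False] * 20:   # 80% rolling accuracy
    mon.record(ok)
print(mon.degraded())  # True: 0.80 < 0.92 - 0.05
```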

Reporting Obligations

| Event | Reporting Deadline | Report To |
|-------|--------------------|-----------|
| Serious incident (death, serious damage to health, property, environment) | Without undue delay after becoming aware | National market surveillance authority |
| Systematic misuse detection | 15 days | National market surveillance authority |
| Significant modification to the system | Before placing modified system on market | Notified body (if applicable) |
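A fixed-window deadline like the 15-day row above is worth tracking programmatically so a report is never filed late. The sketch below takes its timeframes from the table in this article; verify the exact windows against the regulation's incident-reporting provisions before relying on them.

```python
# Sketch of deadline tracking for incident reports. Timeframes mirror the
# table above; confirm them against the regulation before relying on this.

from datetime import date, timedelta

REPORT_WINDOWS = {
    "serious_incident": timedelta(days=0),   # "without undue delay"
    "systematic_misuse": timedelta(days=15),
}

def report_due(event: str, became_aware: date) -> date:
    """Latest date to notify the market surveillance authority."""
    return became_aware + REPORT_WINDOWS[event]

print(report_due("systematic_misuse", date(2026, 8, 2)))  # 2026-08-17
```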

Step 6: Prepare for Conformity Assessments

Depending on your AI system category:

Self-Assessment (Most High-Risk Systems)

  • Complete internal conformity assessment procedure
  • Document compliance with all applicable requirements
  • Issue EU Declaration of Conformity
  • Affix CE marking

Third-Party Assessment (Biometric, Critical Infrastructure, Law Enforcement)

  • Engage an EU-notified body for independent assessment
  • Provide full technical documentation access
  • Undergo testing and audit procedures
  • Obtain conformity certificate

Key Enforcement Deadlines

| Date | What Happens |
|------|--------------|
| February 2, 2025 | Prohibited AI practices ban takes effect |
| August 2, 2025 | Rules for general-purpose AI models apply; governance framework operational |
| August 2, 2026 | Main enforcement date: most obligations for high-risk AI systems apply |
| August 2, 2027 | Remaining obligations for high-risk AI embedded in regulated products |

Common Mistakes to Avoid

  1. Assuming "low risk" means "no obligations": Article 50 transparency requirements apply to nearly all AI systems
  2. Treating this as a one-time project: compliance requires ongoing monitoring and documentation updates
  3. Ignoring the supply chain: if you use third-party AI models (like GPT-4 or Claude), you may still have obligations as a "deployer"
  4. Underestimating documentation requirements: high-risk systems require extensive, structured documentation that regulators can audit
  5. Waiting for national implementation: the EU AI Act is a regulation, not a directive, so it applies directly in all member states

How AI Comply HQ Helps

Instead of navigating 180+ pages of legislation manually, AI Comply HQ turns compliance into a 20-minute guided conversation:

  1. Start an interview: choose voice or chat mode
  2. Answer targeted questions: our AI asks about your specific system, guided by the actual regulatory text
  3. Get classified automatically: your risk tier is determined based on your answers
  4. Receive auto-filled compliance forms: technical documentation, risk assessments, and declarations generated from your responses
  5. Review and download: edit any field, approve, and export your complete compliance package

No spreadsheets. No consultants billing €500/hour. No guesswork about which articles apply to you.

Assess Your Compliance in Minutes

Start a guided EU AI Act compliance interview: voice or chat. The AI asks the right questions, auto-fills your forms, and generates your compliance package.

Start 7-Day Free Trial
