
How to Conduct an EU AI Act Risk Assessment (Step-by-Step)

A step-by-step guide to conducting EU AI Act risk assessments for your AI systems. Learn the four risk tiers, required documentation, and how to classify your system before enforcement deadlines.

AI Comply HQ Team

Why Risk Assessment Is the Foundation of EU AI Act Compliance

Every obligation under the EU AI Act flows from a single determination: what risk tier does your AI system fall into? Get this wrong and you either over-invest in compliance theatre for a minimal-risk spam filter, or -- far worse -- under-comply on a high-risk recruitment tool that exposes your organization to fines of up to EUR 35 million or 7% of global annual turnover.

Risk assessment is not optional. It is the first mandatory step before any AI system can be placed on the EU market or put into service within the EU. And the clock is running.

Enforcement Deadlines You Cannot Ignore

| Milestone | Date | What It Means for You |
|-----------|------|-----------------------|
| Prohibited AI practices banned | February 2, 2025 | Already in effect. If you operate any banned system, you are already non-compliant. |
| General-purpose AI model rules apply | August 2, 2025 | Providers of foundation models must meet transparency and documentation obligations. |
| Main enforcement for high-risk systems | August 2, 2026 | Full compliance required: risk management systems, technical documentation, conformity assessments, CE marking. |
| High-risk AI in regulated products | August 2, 2027 | AI safety components embedded in machinery, medical devices, vehicles, etc. |

If you are reading this in 2026, you have months -- not years -- to complete your risk assessment, build your documentation, and demonstrate compliance. Waiting for "national guidance" is not a viable strategy. The EU AI Act is a regulation, not a directive. It applies directly in all 27 member states without transposition.

The Four Risk Tiers Explained

The EU AI Act classifies AI systems into four tiers based on the potential harm they can cause. Each tier carries different obligations. Understanding these tiers is the core of your risk assessment.

Tier 1: Unacceptable Risk (Prohibited)

These AI practices are banned entirely under Article 5. There is no compliance pathway -- you must decommission them.

Prohibited practices include:

  • Social scoring by public authorities that leads to detrimental treatment of individuals
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement (with very narrow exceptions requiring prior judicial authorization)
  • Emotion recognition in the workplace or educational institutions
  • Exploitation of vulnerabilities -- AI systems designed to manipulate people based on their age, disability, or social or economic situation
  • Untargeted facial image scraping from the internet or CCTV footage to build recognition databases
  • Predictive policing based solely on profiling or personality traits (without objective, verifiable facts)
  • Biometric categorization systems that infer sensitive attributes like race, political opinions, sexual orientation, or religious beliefs

Examples in practice:

  • An employer using camera-based emotion detection during job interviews -- prohibited
  • A government using AI to rank citizens based on social behavior for access to services -- prohibited
  • A retailer scraping social media photos to build a customer facial recognition database -- prohibited

Tier 2: High Risk

High-risk AI systems are permitted but carry the heaviest compliance burden. Article 6 defines them in two ways:

Category 1 -- AI as a safety component of a regulated product (Article 6(1)):

  • Medical devices and in-vitro diagnostics
  • Motor vehicles and their trailers
  • Civil aviation systems
  • Marine equipment
  • Toys, lifts, pressure equipment, machinery
  • Radio equipment

Category 2 -- Standalone high-risk AI (Annex III):

  • Biometrics: Remote biometric identification (non-real-time), biometric categorization, emotion recognition (where not prohibited)
  • Critical infrastructure: AI managing energy, water, gas, heating, or digital infrastructure; road traffic and air traffic management
  • Education: AI determining access to educational institutions, evaluating learning outcomes, monitoring students during exams
  • Employment: AI used in recruitment, job application filtering, promotion decisions, task allocation, performance monitoring, termination decisions
  • Essential services: AI for creditworthiness assessment, health and life insurance risk assessment, emergency service dispatch
  • Law enforcement: AI for risk assessment of individuals, polygraph tools, evidence reliability assessment, crime prediction
  • Migration and border control: AI for risk assessment of irregular migration, visa and asylum application processing
  • Justice and democracy: AI assisting judicial authorities in interpreting facts and applying law

Examples in practice:

  • An HR SaaS platform that uses AI to rank and filter job applicants -- high-risk
  • A fintech tool that uses AI models to determine creditworthiness -- high-risk
  • An AI-powered diagnostic support tool used in hospitals -- high-risk (also regulated as a medical device)
  • A university admissions system using AI to score applicants -- high-risk

Tier 3: Limited Risk

Limited-risk AI systems have only transparency obligations under Article 50. No technical documentation or conformity assessment is required, but you must be honest with users about what they are interacting with.

Obligations:

  • Disclose to users that they are interacting with an AI system (chatbots, virtual assistants)
  • Label AI-generated content (text, images, audio, video) as artificially generated or manipulated, using machine-readable formats
  • Clearly disclose deepfakes depicting real people or events

Examples in practice:

  • A customer support chatbot -- limited risk (must disclose AI interaction)
  • An AI writing assistant that generates marketing copy -- limited risk (output must be labeled)
  • An AI tool generating realistic product images -- limited risk (images must carry synthetic content metadata)
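As one illustration of machine-readable labeling, the sketch below attaches an "AI-generated" marker to a PNG as a text chunk using Pillow. The chunk keys and values are placeholders chosen for illustration -- they are not a format prescribed by the Act -- and production pipelines would more likely attach C2PA content credentials or equivalent provenance metadata:

```python
from PIL import Image, PngImagePlugin

def label_generated_image(in_path: str, out_path: str) -> None:
    """Attach a machine-readable 'AI-generated' marker as a PNG text chunk.

    The keys below are illustrative placeholders, not a standard defined by
    the AI Act; real deployments would typically use C2PA or equivalent.
    """
    image = Image.open(in_path)
    metadata = PngImagePlugin.PngInfo()
    metadata.add_text("ai_generated", "true")           # hypothetical key
    metadata.add_text("generator", "example-model-v1")  # hypothetical value
    image.save(out_path, pnginfo=metadata)
```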

Tier 4: Minimal Risk

The vast majority of AI systems in use today fall into this category. No specific regulatory obligations apply, though the EU encourages voluntary adoption of codes of conduct.

Examples in practice:

  • Spam email filters
  • AI-powered video game NPCs
  • Inventory optimization algorithms
  • Recommendation engines for media content
  • Weather prediction models
  • Automated grammar checkers

Step-by-Step: How to Classify Your AI System

Use this decision-tree approach to determine your risk tier. Work through each question in order.

Step 1: Screen for Prohibited Practices

Ask these questions about every AI system you operate:

  1. Does the system assign scores to individuals based on social behavior that lead to detrimental treatment?
  2. Does the system use real-time biometric identification in public spaces?
  3. Does the system detect emotions in a workplace or school?
  4. Does the system target vulnerable groups (children, elderly, disabled) through manipulative techniques?
  5. Does the system scrape facial images from the internet or CCTV without consent for building recognition databases?
  6. Does the system predict criminal behavior based solely on profiling?

If YES to any question: Your system is prohibited. Decommission it or fundamentally redesign it to eliminate the prohibited functionality.

If NO to all questions: Proceed to Step 2.
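If you track these answers in software, the screening reduces to a single check. Here is a minimal sketch; the question keys are shorthand paraphrases of the checklist above, not legal definitions from Article 5:

```python
# Step 1 sketch: screening for prohibited practices.
PROHIBITED_QUESTIONS = [
    "social_scoring_with_detrimental_treatment",
    "realtime_biometric_id_in_public_spaces",
    "emotion_recognition_in_workplace_or_school",
    "manipulation_targeting_vulnerable_groups",
    "untargeted_facial_image_scraping",
    "crime_prediction_based_solely_on_profiling",
]

def is_prohibited(answers: dict[str, bool]) -> bool:
    """Return True if any screening question is answered 'yes'."""
    return any(answers.get(q, False) for q in PROHIBITED_QUESTIONS)
```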

Step 2: Check If Your System Is a Safety Component of a Regulated Product

  1. Is your AI system embedded in or used as a safety component of a product covered by EU harmonization legislation listed in Annex I? (Medical devices, vehicles, machinery, toys, lifts, etc.)
  2. Is the AI system itself required to undergo a third-party conformity assessment under that product legislation?

If YES to both: Your system is high-risk under Article 6(1). You must comply with all high-risk requirements AND the relevant product safety legislation.

If NO: Proceed to Step 3.
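Step 2 reduces to an equally small check: both answers must be "yes" for this route into high-risk classification to apply. A short sketch with illustrative parameter names:

```python
# Step 2 sketch: safety-component check under Article 6(1).
def is_high_risk_safety_component(
    embedded_in_annex_i_product: bool,
    requires_third_party_conformity_assessment: bool,
) -> bool:
    """High-risk via Article 6(1) only when both conditions hold."""
    return embedded_in_annex_i_product and requires_third_party_conformity_assessment
```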

Step 3: Check Against the Annex III Use-Case List

Compare your system's intended purpose against the eight domains in Annex III:

| Domain | Key Question |
|--------|--------------|
| Biometrics | Does your system identify, categorize, or detect emotions in people using biometric data? |
| Critical infrastructure | Does your system manage or operate energy, water, transport, or digital infrastructure? |
| Education | Does your system determine access to education, evaluate students, or monitor exams? |
| Employment | Does your system recruit, filter applicants, evaluate employees, or make HR decisions? |
| Essential services | Does your system assess creditworthiness, set insurance premiums, or dispatch emergency services? |
| Law enforcement | Does your system assess risk of individuals, evaluate evidence, or predict crime? |
| Migration | Does your system process visa/asylum applications or assess migration risks? |
| Justice | Does your system assist courts in interpreting facts or applying law? |

If your system matches any Annex III domain: Your system is provisionally high-risk. But there is one escape hatch -- proceed to Step 3b.

Step 3b: Apply the Article 6(3) Exception

Even if your system matches an Annex III category, it may fall outside high-risk classification under Article 6(3) if it does not pose a significant risk of harm to health, safety, or fundamental rights -- including by not materially influencing the outcome of decision-making -- and at least one of the following conditions applies:

  • The AI system performs a narrow procedural task
  • The AI system improves the result of a previously completed human activity
  • The AI system detects decision-making patterns or deviations from them, and does not replace or influence a previously completed human assessment without proper human review
  • The AI system performs a task that is merely preparatory to an assessment relevant for the Annex III use case

One caveat overrides the exception entirely: an Annex III system that performs profiling of natural persons is always high-risk.

If the exception applies: You may classify your system as not high-risk, but you must document this determination and make it available to national authorities upon request.

If the exception does not apply: Your system is high-risk. Comply with all Chapter III requirements.
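To make the logic concrete, here is a minimal sketch of the Step 3b check. The parameter names are illustrative shorthand, not terms defined in the Act, and the sketch is an aid to structuring the assessment, not legal advice:

```python
def article_6_3_exception_applies(
    poses_significant_risk_of_harm: bool,
    profiles_natural_persons: bool,
    narrow_procedural_task: bool,
    improves_completed_human_activity: bool,
    detects_patterns_without_replacing_human_review: bool,
    preparatory_task_only: bool,
) -> bool:
    """Return True if the system can plausibly be documented as not high-risk."""
    # The derogation never applies to systems that pose a significant risk of
    # harm or that profile natural persons.
    if poses_significant_risk_of_harm or profiles_natural_persons:
        return False
    # At least one of the four Article 6(3) conditions must hold.
    return any([
        narrow_procedural_task,
        improves_completed_human_activity,
        detects_patterns_without_replacing_human_review,
        preparatory_task_only,
    ])
```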

Step 4: Check for Transparency Obligations

If your system is not high-risk or prohibited, determine if it has limited-risk transparency obligations:

  1. Does your system interact directly with natural persons? (chatbot, virtual assistant)
  2. Does your system generate synthetic text, images, audio, or video?
  3. Does your system create deepfakes?

If YES to any: Your system is limited risk. Implement the required transparency disclosures.

If NO to all: Your system is minimal risk. No mandatory obligations, but consider voluntary codes of conduct.
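Putting Steps 1 through 4 together, the whole decision tree can be sketched as one function. The field names are illustrative, and the sketch is a structuring aid for your assessment, not a substitute for legal review:

```python
def classify_risk_tier(
    prohibited_practice: bool,                    # Step 1: any Article 5 screening question answered "yes"
    safety_component_of_regulated_product: bool,  # Step 2: Annex I product AND third-party conformity assessment
    matches_annex_iii_domain: bool,               # Step 3: intended purpose matches an Annex III use case
    exception_applies: bool,                      # Step 3b: documented Article 6(3) derogation
    interacts_or_generates_content: bool,         # Step 4: chatbot, synthetic media, or deepfakes
) -> str:
    """Return the risk tier suggested by the decision tree in this guide."""
    if prohibited_practice:
        return "unacceptable"  # decommission or redesign
    if safety_component_of_regulated_product:
        return "high"
    if matches_annex_iii_domain and not exception_applies:
        return "high"
    if interacts_or_generates_content:
        return "limited"
    return "minimal"
```

For the HR screening example earlier, classify_risk_tier(False, False, True, False, True) returns "high": the Annex III employment match takes precedence over any transparency-only treatment.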

Step 5: Document Your Classification

Regardless of the outcome, write it down. Create a formal risk classification record that includes:

  • System name and version
  • Date of assessment
  • Assessor name and role
  • Intended purpose of the system
  • Each decision-tree question and your answer
  • Supporting evidence for each answer
  • Final classification determination
  • Review schedule (reassess when the system changes)

This documentation serves two purposes: it protects you during regulatory audits, and it forces your team to rigorously think through what your system actually does versus what you think it does.
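If you maintain these records in a register, a simple structure like the following sketch works well. The field names are illustrative; adapt them to your own system inventory:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ClassificationRecord:
    """One record per AI system, per assessment. Field names are illustrative."""
    system_name: str
    system_version: str
    assessment_date: date
    assessor_name: str
    assessor_role: str
    intended_purpose: str
    questions_and_answers: dict[str, bool]  # each decision-tree question and your answer
    supporting_evidence: dict[str, str]     # evidence reference per answer
    final_classification: str               # "unacceptable" | "high" | "limited" | "minimal"
    next_review_due: date                   # reassess by this date, or sooner if the system changes
```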

Documentation Requirements by Risk Tier

Once you have classified your system, here is exactly what each tier requires:

Unacceptable Risk -- Prohibited

| Requirement | Details |
|-------------|---------|
| System decommissioning record | Document what the system was, when it was decommissioned, and confirmation of data deletion |
| Notification (if applicable) | If operating cross-border, notify relevant national authorities |

High Risk -- Full Compliance Package

| Document | Key Contents | When Required |
|----------|--------------|---------------|
| Risk Management System (Art. 9) | Risk identification, estimation, evaluation, mitigation measures, residual risk acceptance criteria, testing protocols | Before market placement; updated continuously |
| Data Governance (Art. 10) | Training/validation/test data specifications, relevance assessment, bias examination, data gap analysis | Before market placement |
| Technical Documentation (Art. 11) | System description, design choices, architecture, algorithms, training process, performance metrics, cybersecurity measures | Before market placement; updated for significant changes |
| Record-Keeping / Logging (Art. 12) | Automatic logging capabilities, traceability of AI decisions, audit trail specifications | Built into system design |
| Transparency & User Instructions (Art. 13) | Intended purpose, accuracy levels, known limitations, human oversight requirements, expected lifetime, maintenance needs | Provided with system |
| Human Oversight (Art. 14) | Oversight mechanisms, competency requirements for human overseers, ability to override/reverse decisions | Built into system operation |
| Accuracy, Robustness, Cybersecurity (Art. 15) | Accuracy metrics, resilience to errors/faults, protection against adversarial attacks, fallback plans | Tested before market placement |
| Quality Management System (Art. 17) | Compliance strategy, design and development procedures, testing and validation processes, staff competency records | Organizational requirement |
| EU Declaration of Conformity (Art. 47) | Formal declaration that all applicable requirements are met | Before market placement |
| Conformity Assessment (Art. 43) | Self-assessment or third-party audit (most Annex III systems can self-assess via internal control; biometric systems may require a notified body) | Before market placement |
| Post-Market Monitoring Plan (Art. 72) | Ongoing performance monitoring, incident detection and reporting, system update procedures | Continuous |

Limited Risk -- Transparency Only

| Requirement | Details |
|-------------|---------|
| AI interaction disclosure | Clear notice to users that they are interacting with AI |
| Synthetic content labeling | Machine-readable metadata on AI-generated content (C2PA or equivalent) |
| Deepfake disclosure | Visible labeling of manipulated content depicting real people/events |

Minimal Risk -- No Mandatory Requirements

| Recommendation | Details |
|----------------|---------|
| Voluntary code of conduct | Consider adopting the EU's recommended voluntary codes for responsible AI use |
| Internal documentation | Good practice to maintain basic records of AI systems for organizational awareness |

Five Common Classification Mistakes Companies Make

1. Defaulting to "Minimal Risk" Without Analysis

The most dangerous mistake. Companies assume their AI system is low-risk because it "just" does recommendations, filtering, or summarization. But if that filtering happens to involve job applicants, insurance claims, or educational assessments, it is squarely in high-risk territory. The risk tier is determined by use case and impact, not by technical complexity.

2. Ignoring the Deployer's Obligations

You did not build the AI model. You licensed it from a vendor or use it via API. That does not exempt you. The EU AI Act creates obligations for providers (who build) and deployers (who use). If you deploy a high-risk AI system, you must:

  • Ensure human oversight is implemented
  • Monitor the system for risks
  • Report serious incidents
  • Conduct a Fundamental Rights Impact Assessment (FRIA) if you are a public body or provide essential services
  • Keep logs generated by the system for at least six months

Deploying someone else's model does not make their compliance your compliance.

3. Treating Classification as a One-Time Exercise

Your AI system evolves. You fine-tune models, add features, expand use cases, change training data. Each significant change can shift your risk classification. A customer support chatbot (limited risk) that you later integrate into your hiring workflow becomes a high-risk system. Your risk assessment must be a living document with a defined review cadence.
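One lightweight way to enforce that cadence is to store a review date with each classification record and flag the record whenever the system changes. A minimal sketch, assuming the record structure from Step 5:

```python
from datetime import date

def reassessment_due(next_review_due: date, system_changed: bool,
                     today: date | None = None) -> bool:
    """Flag a classification for re-review when the system changes or the review date passes."""
    today = today or date.today()
    return system_changed or today >= next_review_due
```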

4. Conflating GDPR Compliance with EU AI Act Compliance

They are related but distinct regulatory frameworks. A Data Protection Impact Assessment (DPIA) under GDPR does not satisfy the EU AI Act's risk management requirements. You need both. The AI Act's risk management system (Article 9) requires AI-specific analysis: training data bias, performance degradation, adversarial vulnerability, and alignment between intended purpose and actual deployment.

5. Relying on the Article 6(3) Exception Without Documentation

Some companies learn about the exception for "narrow procedural tasks" and immediately claim all their Annex III systems qualify. This is risky. The exception only applies where the system poses no significant risk of harm, at least one of the listed conditions genuinely holds, and the system does not profile natural persons -- and you must document this determination. If a regulator audits you and you cannot demonstrate that your system truly performs only a narrow procedural task that does not replace human judgment, the exception collapses -- and you are non-compliant with every high-risk requirement you skipped.

How AI Comply HQ's Interview Mirrors This Process

If the decision tree above looks daunting, you are not alone. The classification process involves interpreting regulatory text against your specific technical and operational reality. It demands expertise in both EU law and AI systems -- a combination most organizations do not have in-house.

This is precisely the problem AI Comply HQ was built to solve.

When you start an AI Comply HQ interview, the system walks you through the same risk classification logic outlined in this guide -- but as a guided conversation rather than a static checklist. Here is how it maps:

  1. Prohibited practices screening. The interview begins by asking targeted questions about your system's functionality to rule out (or flag) any prohibited use cases under Article 5.

  2. Product safety check. If your AI system is embedded in a physical product, the interview identifies whether it falls under EU harmonized product legislation and routes you to the correct compliance pathway.

  3. Annex III domain matching. Through a series of plain-language questions about what your system does and who it affects, the interview determines whether your system falls within any of the eight high-risk domains. No regulatory jargon required -- you describe your system; the AI maps it to the law.

  4. Article 6(3) exception analysis. If your system matches an Annex III category, the interview probes whether the narrow-task exception applies -- asking about human review processes, decision-making authority, and the preparatory nature of your system's output.

  5. Documentation generation. Based on your classification, AI Comply HQ auto-generates the compliance documents your tier requires -- from transparency disclosures for limited-risk systems to full technical documentation packages for high-risk systems. Every form field is pre-filled from your interview responses. You review, edit where needed, and export.

The entire process takes roughly 20 minutes. It replaces weeks of consultant-led workshops with a structured, auditable, repeatable assessment that you can run for every AI system in your portfolio.

Your risk classification is only as reliable as the process behind it. An unstructured internal discussion produces a different (and indefensible) result than a systematic, documented assessment. AI Comply HQ gives you the system.

Assess Your Compliance in Minutes

Start a guided EU AI Act compliance interview: voice or chat. The AI asks the right questions, auto-fills your forms, and generates your compliance package.

Start 7-Day Free Trial
