
EU AI Act Prohibited AI Practices: Is Your System at Risk?

Eight categories of AI practices are now banned under the EU AI Act. Learn which AI uses are prohibited, the narrow exceptions, and how to audit your systems before enforcement actions begin.

AI Comply HQ Team

The Most Dangerous Category in the EU AI Act

When most people hear about the EU AI Act, they think about high-risk AI systems and the documentation burden that comes with them. But there is a tier above high-risk that carries no compliance pathway at all: prohibited practices. If your AI system falls into one of these eight categories, there is no amount of documentation, auditing, or risk mitigation that will save you. The system must be shut down, full stop.

Article 5 of the EU AI Act defines eight categories of AI practices that are considered so fundamentally incompatible with EU values -- human dignity, non-discrimination, democracy, the rule of law -- that they are banned outright. This prohibition took effect on February 2, 2025, making it the first enforceable provision of the entire regulation.

The penalties for operating a prohibited AI system are the harshest in the Act: fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher. For a company with EUR 500 million in global revenue, 7% of turnover is exactly EUR 35 million, so a single violation can reach the full EUR 35 million cap; for larger companies, the percentage-based cap climbs well beyond that.
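
To make the arithmetic concrete, here is a minimal Python sketch of the fine-cap calculation; the function name and example figures are illustrative, not an official calculator.

```python
def max_article5_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound for a prohibited-practice fine: EUR 35 million or 7% of
    total worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, worldwide_annual_turnover_eur * 7 / 100)

# EUR 500 million turnover: 7% is exactly EUR 35 million, so the cap is EUR 35 million.
print(max_article5_fine_eur(500_000_000))    # 35000000.0
# EUR 2 billion turnover: 7% is EUR 140 million, so the percentage-based cap applies.
print(max_article5_fine_eur(2_000_000_000))  # 140000000.0
```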

This article walks through each of the eight prohibited categories in detail, explains who is affected, identifies the narrow exceptions, and provides a practical audit framework you can use to assess your own AI systems.

The Eight Prohibited AI Practices Under Article 5

1. Social Scoring

What is banned: AI systems that evaluate or classify natural persons based on their social behavior or known, inferred, or predicted personal characteristics, where the resulting social score leads to detrimental or unfavorable treatment that is unjustified, disproportionate, or applied in contexts unrelated to the one in which the data was originally collected. The Commission's original proposal limited this to public authorities; the final text covers public and private actors alike.

Why it is banned: Social scoring treats people as data points rather than individuals. It creates chilling effects on free expression and freedom of assembly, and it systematically disadvantages marginalized communities.

Real-world example: China's Social Credit System is the most widely cited precedent. Under this system, citizens receive a score based on financial behavior, social media activity, purchasing patterns, and interactions with government services. Low scores can restrict access to travel, education, and financial products. Any analogous system deployed within or targeting persons in the EU would be illegal under Article 5.

What catches companies off guard: This prohibition extends beyond government agencies. If a private company builds or operates a social scoring system on behalf of a public authority -- for example, under a government procurement contract -- that company is equally liable. Additionally, loyalty or reputation scoring systems operated by private entities could fall within this prohibition if they produce effects that are "unjustified or disproportionate to social behavior."

2. Exploiting Vulnerabilities of Specific Groups

What is banned: AI systems that exploit the vulnerabilities of a person or a specific group of persons due to their age, disability, or specific social or economic situation, with the objective or the effect of materially distorting their behavior in a way that causes or is reasonably likely to cause significant harm to that person or to another person.

Why it is banned: Targeting people who lack the capacity or resources to protect themselves is a fundamental violation of consumer protection principles and human dignity under the EU Charter of Fundamental Rights.

Real-world examples:

  • An AI-powered lending platform that detects signs of cognitive decline in elderly users and steers them toward high-interest, unsuitable financial products
  • A mobile game that uses AI to identify children and adapt in-game purchase prompts to maximize spending by exploiting their limited understanding of monetary value
  • A payday loan chatbot that monitors a user's browsing patterns to detect financial distress and increases the urgency of its messaging accordingly

Key nuance: The prohibition is not limited to systems that intend to exploit. If the system's design has the foreseeable effect of exploiting vulnerabilities -- even as a side effect of optimizing for engagement or revenue -- it may still be caught by this provision. This is an outcome-based test, not just an intent-based one.

3. Real-Time Remote Biometric Identification in Publicly Accessible Spaces

What is banned: The use of real-time remote biometric identification systems (most commonly live facial recognition) in publicly accessible spaces for law enforcement purposes.

Why it is banned: Mass biometric surveillance in public spaces fundamentally undermines the right to privacy and the right to anonymous movement. It creates a surveillance infrastructure that is inherently incompatible with democratic society.

Real-world examples:

  • Live facial recognition cameras deployed across a city center that scan every passerby against a law enforcement database
  • An AI-powered CCTV network at a public transit station that identifies individuals in real time using gait recognition
  • A drone surveillance system deployed at a public protest that uses biometric identification to identify attendees

The narrow law enforcement exceptions: This is the only prohibited category for which the EU AI Act sets out a detailed authorization procedure for limited exceptions. Real-time remote biometric identification may be permitted when all of the following conditions are met:

  1. Authorized in advance by a judicial authority or an independent administrative authority whose decision is binding (in duly justified urgent cases, use may begin without authorization provided it is requested within 24 hours, and must stop if the authorization is refused)
  2. Limited in time and geographic scope to what is strictly necessary
  3. Subject to a prior fundamental rights impact assessment
  4. Registered in the EU high-risk AI database
  5. Used only for one of three permitted purposes:
    • Targeted search for specific victims of abduction, trafficking, or sexual exploitation, as well as the search for missing persons
    • Prevention of a specific, substantial, and imminent threat to the life or physical safety of natural persons, or of a genuine and present or foreseeable threat of a terrorist attack
    • Locating or identifying a suspect of one of the serious criminal offenses listed in Annex II, punishable by a custodial sentence of at least four years

| Exception Condition | Requirement |
|---|---|
| Authorization | Judicial authority or binding independent administrative authority (urgent use permitted only if authorization is requested within 24 hours) |
| Scope limitation | Strict temporal and geographic boundaries |
| Impact assessment | Fundamental rights impact assessment completed beforehand |
| Registration | System registered in the EU high-risk AI database |
| Permitted purposes | Victim or missing-person search, imminent threat prevention, or suspect identification for serious crimes (4+ year sentences) |

What this means in practice: Even law enforcement agencies that qualify for an exception face an extremely high procedural bar. Private operators cannot invoke the exceptions at all: a shopping mall deploying live facial recognition to identify shoplifters, a stadium scanning fans' faces at entry, or a private security firm running real-time biometric identification in a public park would all fall foul of the Act -- under this prohibition where the identification serves law enforcement purposes, and in any event under the high-risk regime and GDPR's strict limits on biometric data.
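
To underline how conjunctive the exception is, here is a rough screening sketch in Python; every field name is invented for illustration, and each answer would in practice come from legal counsel rather than from code.

```python
from dataclasses import dataclass

@dataclass
class RbiExceptionScreen:
    """Illustrative yes/no record of the exception conditions for one proposed use."""
    deployer_is_law_enforcement: bool
    judicial_authorization_in_place: bool    # or requested within 24 hours in urgent cases
    scope_limited_in_time_and_place: bool
    fundamental_rights_impact_assessed: bool
    registered_in_eu_database: bool
    purpose_is_one_of_the_three_permitted: bool

    def exception_available(self) -> bool:
        # Every single condition must hold; failing any one leaves the use prohibited.
        return all(vars(self).values())

# A private operator fails the first condition, so no combination of the
# remaining answers can make the exception available.
mall = RbiExceptionScreen(False, True, True, True, True, True)
print(mall.exception_available())  # False
```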

4. Emotion Recognition in Workplaces and Educational Institutions

What is banned: AI systems that infer the emotions of natural persons in the areas of the workplace and educational institutions, except where the system is intended to be used for medical or safety reasons.

Why it is banned: Emotion recognition in power-asymmetric environments -- where employers control livelihoods and educators control academic futures -- creates coercive surveillance that chills authentic human behavior and disproportionately affects people with certain disabilities, neurodivergent individuals, and people from different cultural backgrounds.

Real-world examples:

  • A call center management tool that monitors agent facial expressions and voice tone to generate "engagement scores" used in performance reviews
  • A proctoring system that flags students as "suspicious" based on eye movement patterns and facial micro-expressions during exams
  • A hiring platform that analyzes video interview footage to score candidates on "emotional intelligence" or "cultural fit"
  • Classroom monitoring software that tracks students' attention levels through webcam-based emotion detection

The medical/safety exception: Emotion recognition is still permitted in workplaces and schools when it serves a genuine medical or safety purpose. For example:

  • A fatigue detection system in a truck cab that alerts a driver when it detects signs of drowsiness (safety purpose)
  • A therapeutic tool used by a school psychologist to support students with autism in understanding emotional cues (medical purpose)

These exceptions are narrowly construed. Using the same fatigue detection data for productivity monitoring or performance management would cross the line back into prohibited territory.

5. Untargeted Scraping of Facial Images

What is banned: AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

Why it is banned: Indiscriminate collection of biometric data without consent violates the fundamental right to data protection under GDPR and the EU Charter, and creates databases that enable mass surveillance at scale.

Real-world example: Clearview AI is the paradigmatic case. The company scraped billions of facial images from social media platforms, news websites, and other public sources to build a facial recognition database marketed to law enforcement and private companies. Under the EU AI Act, the very act of building such a database through untargeted scraping is prohibited -- regardless of how the database is subsequently used.

Key distinctions:

  • Untargeted scraping (prohibited): Crawling the open internet or bulk CCTV feeds to amass facial images from anyone and everyone
  • Targeted collection (not prohibited by this article, but subject to other laws): Collecting facial images with consent for a specific, lawful purpose -- for example, enrolling employees in a voluntary facial recognition access control system

6. Predictive Policing Based Solely on Profiling

What is banned: AI systems used by law enforcement to make assessments of the risk of a natural person offending or reoffending, or to predict the occurrence or reoccurrence of criminal activity, based solely on the profiling of a natural person or the assessment of their personality traits and characteristics.

Why it is banned: Predictive policing based on profiling entrenches historical biases in the criminal justice system, disproportionately targets minority communities, and violates the presumption of innocence.

Real-world examples:

  • A system that assigns "risk scores" to individuals based on their neighborhood, age, socioeconomic status, and prior police contact -- and uses those scores to direct patrol resources
  • A recidivism prediction tool that scores defendants at sentencing based on demographic and behavioral profiles without reference to the specific facts of the offense

What is NOT banned: AI tools that support law enforcement based on objective, verifiable facts directly linked to criminal activity remain permissible. For example:

  • AI analysis of crime scene evidence (DNA, fingerprints, digital forensics)
  • Pattern analysis of transaction data to detect fraud when linked to specific factual indicators
  • Geographic crime mapping based on reported incident data (without individual profiling)

The critical word is "solely." If an AI system uses profiling as one input among many, including objective factual elements, it may not fall under this prohibition -- but it would almost certainly qualify as a high-risk system with its own extensive compliance obligations.

7. Biometric Categorization Inferring Sensitive Attributes

What is banned: AI systems that categorize natural persons based on their biometric data to infer or deduce their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.

Why it is banned: Using biometric data to infer protected characteristics enables discrimination at scale. It converts physical appearance into a tool for sorting people by attributes that are protected under EU anti-discrimination law.

Real-world examples:

  • An AI system that analyzes facial features to classify people by ethnic origin for targeted advertising or content filtering
  • A voice analysis tool that claims to infer sexual orientation from speech patterns
  • A gait recognition system used at a place of worship to identify and categorize attendees by religious affiliation

Important exception: This prohibition does not apply to the labeling or filtering of lawfully acquired biometric datasets in the area of law enforcement. For example, a law enforcement agency can still tag images in an evidence database by observable characteristics to organize case files. However, deploying an AI system that infers protected attributes from biometric data for operational decision-making would be caught.

8. Subliminal Manipulation Techniques

What is banned: AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting a person's behavior, causing or being reasonably likely to cause that person or another person significant harm.

Why it is banned: People have the right to make autonomous decisions. AI systems designed to circumvent conscious decision-making -- whether through imperceptible stimuli, addictive design patterns, or deceptive interfaces -- undermine this autonomy.

Real-world examples:

  • An AI recommendation engine that uses subliminal audio cues to influence purchasing decisions
  • A political campaign tool that generates micro-targeted content designed to manipulate voting behavior through psychographic profiling and deceptive framing
  • A social media algorithm optimized to maximize engagement through techniques that exploit psychological vulnerabilities (e.g., intermittent reinforcement, fear of missing out) in ways that cause demonstrable harm

Key threshold: Not all persuasive AI is prohibited. The threshold requires either (a) subliminal techniques beyond consciousness, or (b) purposefully manipulative or deceptive techniques, combined with (c) material distortion of behavior and (d) significant harm. A product recommendation engine that suggests items based on purchase history is persuasive but not manipulative. An engine that uses dark patterns and psychological exploitation to induce compulsive purchasing crosses the line.
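
Read as a logical test, the threshold combines those four elements as sketched below; this is an interpretive aid, not statutory language, and the parameter names are invented.

```python
def manipulation_prohibited(
    uses_subliminal_techniques: bool,              # (a) stimuli beyond a person's consciousness
    uses_manipulative_or_deceptive_methods: bool,  # (b) purposefully manipulative or deceptive
    materially_distorts_behavior: bool,            # (c) material distortion of behavior
    significant_harm_caused_or_likely: bool,       # (d) significant harm caused or reasonably likely
) -> bool:
    """Interpretive sketch of the threshold: (a) or (b), combined with both (c) and (d)."""
    return (
        (uses_subliminal_techniques or uses_manipulative_or_deceptive_methods)
        and materially_distorts_behavior
        and significant_harm_caused_or_likely
    )
```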

Summary: All Eight Prohibited Practices at a Glance

| # | Prohibited Practice | Key Trigger | Effective Date |
|---|---|---|---|
| 1 | Social scoring | Persons evaluated or classified by social behavior or personal characteristics; score leads to unjustified or disproportionate detriment | Feb 2, 2025 |
| 2 | Exploiting vulnerabilities | AI exploits age, disability, or socioeconomic situation to distort behavior, causing harm | Feb 2, 2025 |
| 3 | Real-time biometric ID in public | Live biometric identification in publicly accessible spaces for law enforcement (narrow exceptions) | Feb 2, 2025 |
| 4 | Workplace/education emotion recognition | Inferring emotions at work or school (except medical/safety) | Feb 2, 2025 |
| 5 | Untargeted facial image scraping | Building facial recognition databases through indiscriminate scraping | Feb 2, 2025 |
| 6 | Predictive policing via profiling | Risk assessment based solely on profiling, without objective facts | Feb 2, 2025 |
| 7 | Biometric categorization of sensitive attributes | Inferring race, religion, orientation, etc. from biometric data | Feb 2, 2025 |
| 8 | Subliminal/manipulative techniques | Subliminal or deceptive AI causing material behavioral distortion and harm | Feb 2, 2025 |

How to Audit Your AI Systems for Prohibited Practices

If you operate AI systems that touch any of the domains above, you need a structured audit process. Here is a practical five-step framework.

Step 1: Create a Complete AI System Inventory

You cannot assess what you do not know about. Build a comprehensive register of every AI system your organization develops, deploys, or procures. For each system, document:

  • Purpose and function -- what does it do?
  • Data inputs -- what data does it process? Does it use biometric data, behavioral data, or personal characteristics?
  • Decision outputs -- what decisions or classifications does it produce?
  • Affected populations -- who is subject to the system's outputs?
  • Deployment context -- where is it used (workplace, public space, educational institution, law enforcement)?
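
A lightweight way to keep this register consistent is one structured record per system. The Python sketch below is a minimal shape whose fields mirror the list above; the class name and example values are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the AI system inventory; fields mirror the checklist above."""
    name: str
    purpose: str                       # what does it do?
    data_inputs: list[str]             # e.g. biometric, behavioral, personal characteristics
    decision_outputs: str              # decisions or classifications it produces
    affected_populations: list[str]    # who is subject to the outputs?
    deployment_context: str            # workplace, public space, school, law enforcement, ...
    vendor: str | None = None          # third-party supplier, if procured rather than built

inventory = [
    AISystemRecord(
        name="Support call analytics",
        purpose="Summarize and tag customer service calls",
        data_inputs=["call transcripts", "agent metadata"],
        decision_outputs="Topic tags and call summaries",
        affected_populations=["customers", "support agents"],
        deployment_context="workplace",
        vendor="Example SaaS vendor",
    ),
]
```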

Step 2: Screen Against All Eight Prohibited Categories

For each system in your inventory, run it through a checklist against all eight categories:

  • [ ] Does this system score, rate, or classify people based on social behavior or personal characteristics in a way that could lead to unfavorable treatment?
  • [ ] Does this system interact with or target vulnerable populations (children, elderly, persons with disabilities, persons in financial distress)?
  • [ ] Does this system use real-time biometric identification in any publicly accessible space?
  • [ ] Does this system detect, infer, or analyze emotions in a workplace or educational setting?
  • [ ] Was the system trained on, or does it contribute to, a facial recognition database built through untargeted scraping?
  • [ ] Does this system predict criminal behavior or recidivism based on individual profiling?
  • [ ] Does this system use biometric data to categorize people by race, religion, political opinion, sexual orientation, or other sensitive attributes?
  • [ ] Does this system use subliminal, manipulative, or deceptive techniques that could materially distort user behavior?
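
Those eight questions translate directly into a screening pass over your inventory. The sketch below assumes each question is answered as a boolean per system; the category keys are shorthand invented for illustration.

```python
# Shorthand keys for the eight Article 5 categories, paired with the screening question.
PROHIBITED_CATEGORY_QUESTIONS = {
    "social_scoring": "Scores or classifies people by social behavior, leading to unfavorable treatment?",
    "exploits_vulnerabilities": "Exploits age, disability, or economic distress to distort behavior?",
    "realtime_biometric_id": "Real-time biometric identification in publicly accessible spaces?",
    "emotion_recognition": "Infers emotions in a workplace or educational setting?",
    "untargeted_face_scraping": "Built on or feeding a facial database from untargeted scraping?",
    "predictive_policing": "Predicts offending or reoffending based on individual profiling?",
    "biometric_categorization": "Infers sensitive attributes (race, religion, orientation) from biometrics?",
    "subliminal_manipulation": "Uses subliminal, manipulative, or deceptive behavior-distorting techniques?",
}

def flag_prohibited(answers: dict[str, bool]) -> list[str]:
    """Return the categories where the screening answer for this system was 'yes'."""
    return [category for category, is_yes in answers.items() if is_yes]

# Example screening pass for one system.
answers = {key: False for key in PROHIBITED_CATEGORY_QUESTIONS}
answers["emotion_recognition"] = True   # e.g. an "engagement scoring" add-on in an HR tool
print(flag_prohibited(answers))         # ['emotion_recognition']
```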

Step 3: Assess Borderline Cases

Many systems will not fall neatly into one category. For borderline cases, apply these principles:

  • Err on the side of caution. If a reasonable regulator could interpret your system as prohibited, treat it as prohibited until you have obtained clear legal guidance.
  • Look at effects, not just intent. The EU AI Act assesses many of these prohibitions based on outcomes, not just design goals. If a system was designed for one purpose but has the effect of exploitation, manipulation, or biometric categorization, it may still be caught.
  • Consider the full system lifecycle. A system that is compliant at launch may become non-compliant after updates, retraining, or changes in deployment context.

Step 4: Document Your Assessment

For every AI system, record your prohibited-practice assessment in writing. This serves two purposes:

  1. Regulatory defense: If a regulator investigates, you can demonstrate that you conducted a good-faith assessment
  2. Internal governance: It forces your team to think critically about each system's risk profile

Your documentation should include the system name, the date of assessment, the assessor, the conclusion for each of the eight categories, and the reasoning behind each conclusion.
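
If you want that record in a structured, exportable form, a minimal shape might look like the following sketch; none of these field names are mandated by the Act, they simply capture the elements listed above.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class ProhibitedPracticeAssessment:
    """Written record of one system's Article 5 screening; fields are illustrative."""
    system_name: str
    assessed_on: date
    assessor: str
    conclusions: dict[str, bool]   # category key -> falls within the prohibition?
    reasoning: dict[str, str]      # category key -> short written justification

assessment = ProhibitedPracticeAssessment(
    system_name="Support call analytics",
    assessed_on=date(2025, 3, 1),
    assessor="Compliance lead",
    conclusions={"emotion_recognition": False},
    reasoning={"emotion_recognition": "Tags topics only; does not infer emotional state."},
)

# Export for the compliance file; the date is serialized as ISO 8601 text.
print(json.dumps(asdict(assessment), default=str, indent=2))
```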

Step 5: Remediate or Decommission

If your audit identifies a system that falls within a prohibited category:

  • Immediate action required. These prohibitions are already in effect. There is no grace period.
  • No mitigation pathway. Unlike high-risk systems, prohibited systems cannot be made compliant through additional safeguards. The activity itself is banned.
  • Decommission and document. Shut down the system, remove it from production, and document the decommissioning process for your records.

What Happens If You Are Caught

The enforcement regime for prohibited practices is the most severe in the EU AI Act:

| Violation | Maximum Fine |
|---|---|
| Operating a prohibited AI system | EUR 35 million or 7% of worldwide annual turnover, whichever is higher |
| SMEs and startups | Same categories, but capped at the lower of the fixed amount and the turnover percentage |
| Supplying incorrect information to regulators | EUR 7.5 million or 1% of worldwide annual turnover, whichever is higher |

Beyond financial penalties, enforcement can include:

  • Orders to withdraw the AI system from the EU market
  • Public disclosure of the violation, which carries reputational consequences
  • Potential personal liability under national law for officers who knowingly authorized the deployment of a prohibited system
  • Follow-on litigation from affected individuals under GDPR and national tort law

National market surveillance authorities in each EU member state are responsible for enforcement, with coordination at the EU level through the European AI Office. Several member states have already indicated that prohibited practices will be their top enforcement priority.

Why This Matters Even If You Think Your System Is "Low Risk"

Many organizations assume that the prohibited practices list applies only to surveillance companies and government agencies. This is a dangerous misconception for three reasons.

First, the categories are broader than they appear. "Emotion recognition in the workplace" does not just mean dedicated sentiment analysis platforms. It captures any AI system that infers emotional states -- including features embedded in video conferencing tools, HR platforms, customer service dashboards, or learning management systems. If your AI vendor added an "engagement scoring" feature to your existing platform, you may already be operating a prohibited system without realizing it.

Second, the supply chain creates hidden exposure. If you procure AI-powered tools from third-party vendors, the prohibition applies to you as a deployer. You cannot outsource compliance responsibility by pointing to a vendor's terms of service. Under the EU AI Act, deployers are independently obligated to ensure they are not operating prohibited systems.

Third, exposure has been accruing since February 2025. The prohibition has been in effect since February 2, 2025, so if your organization has been operating a prohibited system since that date, you already carry regulatory exposure. Every day of continued operation increases the severity of a potential enforcement action.

The Bottom Line

The prohibited practices tier of the EU AI Act is not a theoretical risk. It is an immediate, enforceable legal obligation with the highest penalties in the regulation. Whether you are a multinational enterprise or a ten-person startup, the question is not whether these rules apply to you -- it is whether you have verified that they do not.

An audit is not optional. It is the cost of operating AI systems in the EU market.

How AI Comply HQ Can Help

Navigating the prohibited practices analysis does not have to be a manual, spreadsheet-driven exercise. AI Comply HQ walks you through a guided compliance interview that specifically screens for prohibited practice exposure as the very first step in your risk classification.

  1. Start a compliance interview -- voice or chat, your choice
  2. Answer targeted questions about your AI system's purpose, data inputs, deployment context, and affected populations
  3. Receive an instant risk classification that flags any prohibited practice concerns before moving to high-risk or limited-risk analysis
  4. Get auto-generated documentation including your prohibited practice assessment, risk tier determination, and recommended next steps
  5. Export and share your compliance package with legal, regulators, or stakeholders

No guesswork. No EUR 500-per-hour consultants. No gaps in your assessment.

Assess Your Compliance in Minutes

Start a guided EU AI Act compliance interview: voice or chat. The AI asks the right questions, auto-fills your forms, and generates your compliance package.

Start 7-Day Free Trial
