Deep Dive · March 1, 2026 · 8 min

Article 5 Deep Dive: The 8 AI Practices Banned in the EU (Already Enforceable)

Article 5 of the EU AI Act lists AI practices that are outright prohibited in the European Union. Unlike the high-risk obligations, which largely apply from August 2026, these bans have been enforceable since February 2, 2025.

The 8 Prohibited Practices

1. Subliminal Manipulation

**Banned:** AI that uses subliminal techniques beyond a person's consciousness to materially distort behavior, causing or likely to cause significant harm.

Real-world example: A dark pattern that uses AI to subtly alter interface elements (colors, timing, placement) in ways users can't consciously detect, pushing them toward purchases they wouldn't otherwise make.

Key nuance: The manipulation must be "beyond consciousness" AND cause "significant harm." Standard personalization (showing relevant products) is fine. Using AI to exploit cognitive biases users can't perceive is not.

2. Exploitation of Vulnerabilities

**Banned:** AI that exploits vulnerabilities related to age, disability, or social/economic situation to materially distort behavior.

Real-world example: An AI system that detects a user is elderly (from browsing patterns) and presents misleading financial product information, or a system that targets people in financial distress with predatory loan offers.

Key nuance: The vulnerability must be specific (age, disability, economic status) and the exploitation must be intentional or foreseeable.

3. Social Scoring

**Banned:** AI systems used for social scoring — evaluating or classifying people based on social behavior or personal characteristics, where the score leads to detrimental or unfavourable treatment that is unjustified, disproportionate, or unrelated to the context in which the data was collected.

Real-world example: A government system that assigns citizens a trustworthiness score based on their online behavior, social connections, and purchasing patterns, then uses that score to determine access to services.

Key nuance: Early drafts limited this ban to public authorities, but the final text covers private actors too. Private credit scoring used in its specific financial context (like FICO) is regulated as high-risk, not prohibited.

4. Individual Crime Risk Prediction

**Banned:** AI that predicts the risk of a natural person committing a criminal offense, based solely on profiling or personality traits.

Real-world example: A policing system that flags individuals as likely criminals based on their demographic profile, social media activity, or neighborhood, without any connection to actual criminal behavior.

Key nuance: The ban is on profiling-based prediction. AI that analyzes specific criminal evidence (e.g., forensic analysis) is not prohibited.

5. Untargeted Facial Image Scraping

**Banned:** Creating facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

Real-world example: Clearview AI's business model — scraping billions of photos from social media to build a facial recognition database — is exactly what this prohibition targets.

6. Emotion Recognition in Workplace/Education

**Banned:** AI systems that infer emotions of workers or students, except for medical or safety reasons.

Real-world examples:

- AI interview tools that analyze candidate facial expressions
- Employee monitoring software that tracks engagement via webcam
- Student attention monitoring during online classes

Key nuance: Medical and safety exceptions exist (e.g., monitoring a pilot's alertness). But general workplace emotion monitoring is banned.

7. Biometric Categorization Inferring Sensitive Attributes

**Banned:** AI that categorizes people based on biometric data to infer race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation.

Real-world example: A system that analyzes facial features to infer ethnicity or a voice analysis system that claims to detect sexual orientation.

8. Real-Time Remote Biometric Identification in Public Spaces

**Banned:** Using real-time biometric identification (like facial recognition cameras) in publicly accessible spaces for law enforcement purposes.

Key nuance: There are narrow exceptions for specific serious crimes (terrorism, kidnapping), but these require prior judicial authorization and are tightly restricted.

What To Do If You're Affected

  1. Stop immediately — these are already enforceable
  2. Document your decision — show you identified and removed the practice
  3. Scan your stack — some AI tools use emotion recognition or biometric categorization without clearly disclosing it
  4. Check your vendors — if a tool you deploy uses a prohibited practice, you're liable as the deployer
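As a rough illustration of steps 3 and 4, a deployer could keep a machine-readable inventory of the AI tools in its stack and flag any whose declared capabilities match an Article 5 category. Everything below — the inventory format, field names, and capability labels — is a hypothetical sketch, not an official schema or legal test; whether a tool actually falls under a prohibition requires case-by-case analysis.

```python
# Hypothetical sketch: flag tools in an AI stack inventory whose declared
# capabilities map to an Article 5 prohibited practice.
# The capability labels and inventory shape are illustrative assumptions.

PROHIBITED_CAPABILITIES = {
    "subliminal_manipulation",
    "vulnerability_exploitation",
    "social_scoring",
    "crime_risk_profiling",
    "untargeted_face_scraping",
    "workplace_emotion_recognition",
    "sensitive_biometric_categorization",
    "realtime_public_biometric_id",
}

def flag_prohibited(inventory):
    """Return (tool_name, capability) pairs that need immediate review."""
    flags = []
    for tool in inventory:
        for cap in tool.get("capabilities", []):
            if cap in PROHIBITED_CAPABILITIES:
                flags.append((tool["name"], cap))
    return flags

# Example stack: one hypothetical vendor tool triggers a flag, one does not.
stack = [
    {"name": "HiringCam", "capabilities": ["workplace_emotion_recognition"]},
    {"name": "RecSys", "capabilities": ["product_personalization"]},
]
print(flag_prohibited(stack))  # → [('HiringCam', 'workplace_emotion_recognition')]
```

The point of the exercise is the documentation trail: a dated inventory plus the list of flags shows you identified and acted on the practice, which is exactly what step 2 asks for.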

The penalties for prohibited practices are the highest in the Act: up to €35M or 7% of global annual turnover, whichever is higher.

Check your compliance status

Scan your AI product against the EU AI Act framework in 60 seconds.

Scan Now