Explainer · March 20, 2026 · 6 min

The 4 Risk Levels of the EU AI Act: Where Does Your Product Fall?

The EU AI Act takes a risk-based approach to regulating AI. Not all AI systems are treated equally — the obligations you face depend on which risk tier your system falls into.

The Four Tiers

Prohibited (Article 5)

These AI practices are banned outright in the EU:

- Social scoring by public authorities
- Subliminal manipulation techniques
- Exploitation of vulnerable groups
- Untargeted facial image scraping
- Emotion recognition in workplace and education settings
- Real-time remote biometric identification in public spaces (with narrow exceptions)

Enforcement: The prohibitions have applied since 2 February 2025.

High-Risk (Article 6 + Annex III)

AI systems in these domains face the strictest requirements:

- **Biometrics**: Remote identification, categorization
- **Critical infrastructure**: Energy, water, transport, digital infrastructure
- **Education**: Admission, assessment, monitoring
- **Employment**: Recruitment, screening, evaluation, promotion, termination
- **Essential services**: Credit scoring, insurance, emergency dispatch
- **Law enforcement**: Risk assessment, evidence analysis
- **Migration**: Border control, visa processing
- **Justice**: Legal research, dispute resolution

Requirements: a risk management system, data governance, technical documentation, logging, transparency, human oversight, and measures for accuracy, robustness, and cybersecurity.

Limited Risk (Transparency Obligations)

- Chatbots must disclose that they're AI
- AI-generated content must be labeled
- Deepfakes must be disclosed
- Emotion recognition systems must inform the people they are applied to

Minimal Risk

No mandatory requirements. Voluntary codes of conduct are encouraged. Most AI systems fall here: spam filters, recommendation engines, game AI.
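Putting the four tiers together, a first-pass triage can be expressed as a simple lookup. The sketch below is purely illustrative: the category names are simplified stand-ins, not the Act's legal definitions, and a real classification always requires checking the actual text of Articles 5 and 6 and Annex III.

```python
def classify_risk_tier(practice: str) -> str:
    """Illustrative first-pass triage of an AI use case into the Act's four tiers.

    The category sets are hypothetical shorthand for Article 5 (prohibited),
    Annex III (high-risk), and the transparency obligations (limited risk).
    Anything not matched falls through to minimal risk, mirroring the Act's
    default for out-of-scope systems.
    """
    PROHIBITED = {"social_scoring", "subliminal_manipulation",
                  "workplace_emotion_recognition", "untargeted_face_scraping"}
    HIGH_RISK = {"credit_scoring", "recruitment_screening",
                 "exam_proctoring", "border_control"}
    TRANSPARENCY = {"chatbot", "deepfake_generator", "emotion_recognition"}

    if practice in PROHIBITED:
        return "prohibited"
    if practice in HIGH_RISK:
        return "high-risk"
    if practice in TRANSPARENCY:
        return "limited-risk"
    return "minimal-risk"
```

Note the ordering: prohibition is checked first, because a banned practice can never be "re-classified" downward, and the minimal-risk tier is the fall-through default, just as in the Act itself.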

The Exception: Article 6(3)

A system matching an Annex III high-risk category may be exempt if it does not pose a significant risk to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making, and it meets at least one qualifying condition, such as:

- It is intended to perform a narrow procedural task
- It is only preparatory to an assessment (not the assessment itself)

One caveat: an Annex III system that performs profiling of natural persons is always high-risk, regardless of these conditions.

The exemption is narrow, and providers relying on it must document their assessment.

How to Check Your Classification

The fastest way to determine where your AI product falls is to scan it against the full framework. An automated compliance checker can analyze your product, identify every AI system, and classify each one under the correct risk tier with specific article references.

Check your compliance status

Scan your AI product against the EU AI Act framework in 60 seconds.

Scan Now