
AI Act Risk Assessment

The EU AI Act classifies AI systems into four risk tiers, each with different compliance requirements. Understanding where your AI systems fall is the first step to compliance.

Prohibited

Article 5

AI practices that are banned outright under the EU AI Act.

Examples: Social scoring by governments, subliminal manipulation, exploitation of vulnerable groups, untargeted facial recognition scraping, and real-time remote biometric identification in publicly accessible spaces (permitted only under narrow law-enforcement exceptions).

High Risk

Article 6 + Annex III

AI systems that must meet strict requirements before they can be placed on the EU market.

Examples: AI in hiring/recruitment, credit scoring, medical devices, education assessment, law enforcement profiling, critical infrastructure management.

Limited Risk

Article 50

AI systems with transparency obligations — users must be told they're interacting with AI.

Examples: Chatbots, AI-generated content (text, images, video), emotion recognition systems, deepfake generators.

Minimal Risk

No mandatory requirements

Most AI systems fall here. No compliance obligations, but voluntary codes of conduct are encouraged.

Examples: Spam filters, AI-powered search, recommendation engines for entertainment, game AI.

Check your product now — free

No sign-up required. Results in 60 seconds.