The EU AI Act (Regulation (EU) 2024/1689) classifies AI systems used in recruitment and hiring as "high-risk." If your organization uses AI for CV screening, candidate scoring, interview analysis, or automated shortlisting, there are specific compliance obligations you must meet before August 2, 2026.
Who This Applies To
The critical point most HR teams miss: these obligations apply to deployers (the Act's term for organizations using AI tools), not just the providers building them. Saying "our ATS vendor handles compliance" is not sufficient under the Act.
The EU AI Act has extraterritorial scope. If your AI systems process candidates based in the EU, you must comply — regardless of where your company is headquartered.
The Four Key Obligations for HR
1. Transparency (Article 13): Candidates must know AI is involved in their application process. This disclosure needs to be clear and upfront, not buried in a privacy policy footnote.
2. Human Oversight (Article 14): You must implement meaningful human oversight measures: documented processes showing humans can understand, interpret, and override AI decisions. A rubber-stamp "approve" button does not count.
3. Bias Testing (Article 10): You need evidence that your AI tools do not discriminate across protected characteristics. This requires documented bias audits and quality criteria for training data.
4. Record-Keeping (Article 12): Maintain logs of how AI systems are used and what they output, so you have traceability throughout the system's lifecycle.
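The bias-testing and record-keeping obligations lend themselves to simple tooling. As an illustration only, here is a minimal Python sketch of one widely used bias screen, the "four-fifths rule" from US employment practice (the Act itself does not prescribe a specific metric), which also emits a timestamped JSON line as an audit-log record. The group names and numbers are hypothetical.

```python
import json
import datetime

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total). Returns rate per group."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate.
    Ratios below 0.8 are a common red flag (the 'four-fifths rule')."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening outcomes by demographic group: (selected, total)
outcomes = {
    "group_a": (45, 100),  # 45% selection rate
    "group_b": (30, 100),  # 30% selection rate
}

ratios = adverse_impact_ratios(outcomes)
flagged = {g: r for g, r in ratios.items() if r < 0.8}

# Audit-log record: one JSON line per bias check, for traceability
record = {
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "metric": "adverse_impact_ratio",
    "ratios": ratios,
    "flagged": sorted(flagged),
}
print(json.dumps(record))
```

In this example group_b's ratio is 0.30 / 0.45 ≈ 0.67, below the 0.8 threshold, so it would be flagged for review. A real audit would need statistical significance testing and legal review on top of a screen like this.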
What About Emotion Recognition?
Here's what catches many HR teams off guard: emotion recognition in the workplace is a prohibited practice under Article 5(1)(f). If you're using any AI tool that analyzes candidate facial expressions, tone of voice, or body language during interviews, that's not just high-risk; it may be outright banned.
This prohibition has been enforceable since February 2, 2025.
The Timeline
- February 2, 2025: Prohibited practices become enforceable (including emotion recognition in the workplace)
- August 2, 2025: Obligations for general-purpose AI (GPAI) models take effect
- August 2, 2026: Full enforcement for high-risk AI systems (including all recruitment AI)
Penalties
Non-compliance with the prohibited-practices rules (such as workplace emotion recognition) can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher. Breaches of the high-risk obligations carry fines of up to €15 million or 3%.
What To Do Now
- Audit your recruitment stack — identify every tool that uses AI for decision-making
- Check for emotion recognition — if any tool analyzes facial expressions or tone, stop using it immediately
- Document your processes — create transparency notices, human oversight procedures, and bias testing protocols
- Scan your tools — use an automated compliance checker to identify specific gaps against the EU AI Act framework
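The audit in the first step can start as a structured inventory. A hypothetical sketch of how the gaps above might be flagged programmatically; the tool names and checklist fields are made up for illustration, not part of the Act:

```python
# Hypothetical inventory of recruitment tools; field names are illustrative.
TOOLS = [
    {"name": "cv_screener", "uses_ai": True, "emotion_recognition": False,
     "human_oversight_documented": True, "candidate_notice": True},
    {"name": "video_interview_analyzer", "uses_ai": True, "emotion_recognition": True,
     "human_oversight_documented": False, "candidate_notice": False},
]

def compliance_gaps(tool):
    """Return a list of gap labels for one tool, per the checklist above."""
    gaps = []
    if not tool["uses_ai"]:
        return gaps  # non-AI tools fall outside these obligations
    if tool["emotion_recognition"]:
        gaps.append("PROHIBITED: workplace emotion recognition (Article 5)")
    if not tool["candidate_notice"]:
        gaps.append("missing candidate transparency notice (transparency)")
    if not tool["human_oversight_documented"]:
        gaps.append("no documented human oversight process (Article 14)")
    return gaps

for tool in TOOLS:
    for gap in compliance_gaps(tool):
        print(f"{tool['name']}: {gap}")
```

Even a checklist this crude makes the priority order obvious: anything flagged as prohibited must be switched off first, while transparency and oversight gaps can be remediated with documentation.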
The deadline is not moving. Start preparing now.