If you're a SaaS startup with AI features serving EU customers, here's exactly what you need to do. No legal jargon, just actionable steps prioritized by enforcement deadline.
Already Enforceable (Since February 2025)
1. Check for Prohibited Practices
Scan your product for any AI that:
- Manipulates users subliminally or deceptively
- Exploits vulnerabilities (age, disability, economic situation)
- Does social scoring
- Uses emotion recognition in workplace/education settings
- Scrapes facial images from the internet for recognition databases
If you find any of these, stop using them immediately. These are already illegal.
2. Audit Emotion Recognition
If your product analyzes user emotions (sentiment analysis on facial expressions, voice-tone analysis in calls), check whether it falls under the workplace/education prohibition. Most such features do; the narrow exceptions are for medical and safety purposes.
Due by August 2025 (GPAI Obligations)
3. Document Your AI Model Usage
If you use GPT, Claude, Gemini, or other GPAI models:
- Document which models you use and for what purpose
- Understand your model provider's obligations (Art. 53)
- Know where your own obligations as a downstream deployer begin
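One way to keep this documentation from rotting in a wiki is a machine-readable model inventory checked into your repo. A minimal sketch in Python; the `ModelUsage` fields and example values are illustrative, not a format the Act prescribes:

```python
from dataclasses import dataclass

@dataclass
class ModelUsage:
    """One entry per GPAI model your product calls (illustrative fields)."""
    model: str                 # e.g. "gpt-4o"
    provider: str              # upstream GPAI provider
    purpose: str               # what your feature uses the model for
    provider_obligations: str  # pointer to the provider's Art. 53 documentation
    our_role: str              # where your downstream obligations begin

INVENTORY = [
    ModelUsage(
        model="gpt-4o",
        provider="OpenAI",
        purpose="Summarize support tickets for agents",
        provider_obligations="docs/providers/openai-art53.md",  # hypothetical path
        our_role="Deployer: we control prompts, logging, and human review",
    ),
]
```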
4. Copyright Compliance
Ensure your use of GPAI models respects EU copyright law. This is primarily the model provider's obligation, but downstream usage patterns matter.
Due by August 2026 (High-Risk)
5. Classify Every AI Feature
Map each AI feature in your product to a risk tier:
- What does it do?
- What data does it process?
- Who is affected by its outputs?
- Does it fall under any Annex III category?
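The classification exercise is easier to audit if each feature's answers live in code rather than a spreadsheet. A sketch assuming Python 3.10+; the tier names mirror the Act's structure, everything else is illustrative:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # Art. 5 practices
    HIGH = "high"              # Annex III categories
    LIMITED = "limited"        # transparency duties only
    MINIMAL = "minimal"        # no specific obligations

@dataclass
class FeatureClassification:
    feature: str
    what_it_does: str
    data_processed: str
    who_is_affected: str
    annex_iii_category: str | None  # None if no Annex III match
    tier: RiskTier

# Illustrative example: hiring features fall under Annex III (employment)
resume_ranker = FeatureClassification(
    feature="resume-ranker",
    what_it_does="Ranks job applicants for recruiters",
    data_processed="CVs and cover letters",
    who_is_affected="Job applicants",
    annex_iii_category="employment",
    tier=RiskTier.HIGH,
)
```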
6. Risk Management System (Art. 9)
For high-risk features, document:
- Identified risks and their likelihood
- Mitigation measures implemented
- Testing protocols and results
- Residual risk assessment
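The same pattern extends into a reviewable risk register. One possible record shape, not a prescribed Art. 9 format:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk: str           # identified risk
    likelihood: str     # e.g. "low" / "medium" / "high"
    mitigation: str     # measure implemented
    test_evidence: str  # pointer to testing protocol and results
    residual_risk: str  # assessment after mitigation

register = [
    RiskEntry(
        risk="Ranker penalizes non-native phrasing in CVs",
        likelihood="medium",
        mitigation="Bias-tested prompts plus human review of all rejections",
        test_evidence="tests/bias_eval_2026q1.md",  # hypothetical path
        residual_risk="low",
    ),
]
```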
7. Data Governance (Art. 10)
Document your AI training data:
- Quality criteria
- Bias examination procedures
- Whether data is representative and relevant
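For the bias-examination bullet, even a crude representativeness check beats none. A standard-library sketch; the tolerance, attribute, and expected shares are assumptions you would set per dataset:

```python
from collections import Counter

def representation_gaps(records: list[dict], attribute: str,
                        expected: dict[str, float], tolerance: float = 0.05):
    """Compare observed group shares in training data against expected
    population shares; flag groups outside the tolerance band."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected_share in expected.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected_share) > tolerance:
            gaps[group] = {"observed": round(observed, 3),
                           "expected": expected_share}
    return gaps

# Illustrative usage: does the training set reflect the user base?
data = [{"region": "EU"}] * 700 + [{"region": "non-EU"}] * 300
print(representation_gaps(data, "region", {"EU": 0.5, "non-EU": 0.5}))
# {'EU': {'observed': 0.7, 'expected': 0.5}, 'non-EU': {'observed': 0.3, 'expected': 0.5}}
```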
8. Technical Documentation (Art. 11)
Create Annex IV documentation:
- System design specifications
- Development process description
- Testing methodology and results
- Validation procedures
9. Transparency to Users (Art. 13)
Provide users with:
- Instructions for use
- Intended purpose documentation
- Performance limitations
- Human oversight measures
10. Human Oversight (Art. 14)
Design oversight measures into the system:
- Can humans understand the AI's outputs?
- Can humans override or interrupt the AI?
- Are users aware of automation bias risks?
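A common way to satisfy the override and interrupt questions is a confidence-gated human-in-the-loop step. A minimal sketch; the 0.8 threshold and all names are illustrative, not Art. 14 requirements:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    output: str
    confidence: float
    explanation: str  # plain-language rationale a reviewer can understand

def decide(ai_decision: Decision,
           human_review: Callable[[Decision], Decision],
           threshold: float = 0.8) -> Decision:
    """Route low-confidence outputs to a human, who may accept or override.
    Surfacing the explanation helps reviewers resist automation bias."""
    if ai_decision.confidence < threshold:
        return human_review(ai_decision)
    return ai_decision
```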
11. Logging and Traceability (Art. 12)
Implement automatic event logging:
- What decisions the AI made
- What inputs it received
- When and how it was used
- Retention policy for logs
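These bullets map directly onto structured logging. A standard-library sketch that writes one JSON line per AI decision; the field names and the choice to hash inputs rather than store them raw are illustrative:

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_audit.jsonl"))  # ship to retained storage

def log_ai_event(feature: str, model: str, inputs: str, decision: str) -> None:
    """Append one traceable record per AI decision. Hashing avoids retaining
    raw user data in logs; keep raw payloads under your retention policy."""
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "feature": feature,
        "model": model,
        "input_sha256": hashlib.sha256(inputs.encode()).hexdigest(),
        "decision": decision,
    }))

log_ai_event("resume-ranker", "gpt-4o", "applicant CV text...", "rank=3")
```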
12. Accuracy and Robustness (Art. 15)
Document and test:
- Accuracy levels and metrics
- Error resilience
- Cybersecurity measures
- Redundancy plans
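Declared accuracy levels become enforceable internally once they are regression tests. A pytest-style sketch; `classify`, the eval-set path, and the 0.90 floor are placeholders for your own:

```python
# test_accuracy.py - illustrative regression test against a frozen eval set
import json

ACCURACY_FLOOR = 0.90  # your declared accuracy level; document whatever you pick

def classify(text: str) -> str:
    """Placeholder for your real model call."""
    raise NotImplementedError

def test_accuracy_floor():
    with open("eval/golden_set.jsonl") as f:  # frozen, versioned eval data
        cases = [json.loads(line) for line in f]
    correct = sum(classify(c["input"]) == c["expected"] for c in cases)
    accuracy = correct / len(cases)
    assert accuracy >= ACCURACY_FLOOR, f"accuracy {accuracy:.3f} below floor"
```

Run it in CI so an accuracy regression blocks a deploy instead of surfacing in an audit.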
The Priority Order
- Now: Stop any prohibited practices (manipulation, workplace/education emotion recognition)
- This month: Classify all AI features by risk level
- This quarter: Start documentation for high-risk features
- By June 2026: Complete all Articles 8-15 requirements
- By August 2026: Full compliance verified and documented