If your company is already GDPR-compliant, you have a head start on the EU AI Act. But the AI Act is not GDPR 2.0 — it regulates different things in different ways.
What GDPR Regulates vs What the AI Act Regulates
GDPR governs what data you can collect, process, and store. It's about personal data protection.
The AI Act governs how your AI system is designed, documented, tested, and deployed. It's about the AI system itself, not just the data it uses.
They overlap but don't replace each other. You need to comply with both.
Where GDPR Gives You a Head Start
Data Processing Records → Technical Documentation
GDPR requires Records of Processing Activities (Art. 30). The AI Act requires technical documentation (Art. 11). Your GDPR records are a foundation, but AI Act documentation goes deeper into system design, testing, and validation.

Data Protection Impact Assessment → Fundamental Rights Impact Assessment
GDPR's DPIA (Art. 35) assesses risks to personal data. The AI Act's FRIA (Art. 27) assesses risks to fundamental rights from AI decisions. Similar process, different scope.

Right to Explanation → Transparency Obligations
GDPR's right not to be subject to solely automated decision-making (Art. 22) overlaps with the AI Act's transparency requirements (Art. 13). If you're already explaining automated decisions, you're partially covered.

Data Quality → Data Governance
GDPR requires data accuracy (Art. 5). The AI Act requires specific data governance for training data (Art. 10): quality criteria, bias examination, representativeness.
Where the AI Act Adds New Requirements
Risk Classification
GDPR doesn't classify processing by risk tier. The AI Act requires you to classify every AI system as prohibited, high-risk, limited-risk, or minimal-risk.
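The four tiers can be pictured as a simple triage, sketched below. This is a hypothetical illustration only: the flag names and examples are assumptions, not the Act's actual legal tests, which live in Art. 5 and Annex III and require legal review.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical triage helper. The boolean flags are illustrative
# stand-ins for the Act's real criteria (Art. 5 prohibited practices,
# Annex III high-risk areas, chatbot-style transparency duties).
def classify(uses_banned_practice: bool,
             in_annex_iii_area: bool,
             interacts_with_humans: bool) -> RiskTier:
    if uses_banned_practice:      # e.g. social scoring
        return RiskTier.PROHIBITED
    if in_annex_iii_area:         # e.g. hiring, credit scoring
        return RiskTier.HIGH
    if interacts_with_humans:     # e.g. a customer-facing chatbot
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

In practice the tiers are checked in exactly this order: a prohibited use can never be downgraded by how the system is deployed, and a high-risk classification carries all the limited-tier transparency duties on top of its own.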
Conformity Assessment
High-risk AI systems need a conformity assessment (Art. 43) before being placed on the market. GDPR has no equivalent pre-market approval.

CE Marking
High-risk AI systems need a CE marking (Art. 48): a declaration that the system meets all requirements. This is entirely new.

Human Oversight
GDPR mentions human involvement in automated decisions (Art. 22). The AI Act goes much further, requiring designed-in oversight measures, the ability to understand and override the AI, and awareness of automation bias (Art. 14).

Post-Market Monitoring
The AI Act requires ongoing monitoring of AI systems after deployment (Art. 72) and serious incident reporting (Art. 73). GDPR has breach notification but no product-level monitoring.
The Practical Takeaway
If you're GDPR-compliant:
- Your data governance foundation helps with Art. 10
- Your DPIA processes help with Art. 27
- Your documentation habits help with Art. 11
- Your consent and transparency work helps with Art. 13

But you still need to:
- Classify your AI systems by risk
- Build risk management systems (Art. 9)
- Implement human oversight measures (Art. 14)
- Create conformity assessments for high-risk systems
- Set up post-market monitoring
They're complementary, not duplicative.