Industry Guide · February 25, 2026 · 6 min

EU AI Act & Healthcare AI: What Medical Device Companies Need to Know

Healthcare AI sits at the intersection of the EU AI Act and existing medical device regulations. If you build AI for diagnostics, treatment recommendations, or clinical decision support, here's how the two frameworks interact.

Why Healthcare AI is High-Risk

The EU AI Act classifies AI as high-risk when it's a safety component of a product covered by EU harmonization legislation — including the Medical Devices Regulation (MDR) and In Vitro Diagnostics Regulation (IVDR).

This means most AI in medical devices is automatically high-risk under the AI Act — specifically, any AI that is a safety component of (or is itself) a device requiring third-party conformity assessment under the MDR or IVDR.

Additionally, Annex III explicitly lists these healthcare-adjacent use cases:

  • AI systems intended to be used for determining access to health insurance
  • AI used in emergency services dispatch
  • AI for prioritizing emergency response

The Double Regulation Challenge

Healthcare AI companies face compliance with both:

Medical Devices Regulation (MDR/IVDR):

  • Clinical evaluation and performance studies
  • CE marking through notified bodies
  • Post-market surveillance
  • Vigilance reporting

EU AI Act (for high-risk):

  • Risk management system (Art. 9)
  • Data governance (Art. 10)
  • Technical documentation (Art. 11)
  • Logging and traceability (Art. 12)
  • Transparency (Art. 13)
  • Human oversight (Art. 14)
  • Accuracy and robustness (Art. 15)

Where They Overlap (Good News)

If you're already MDR-compliant, you have significant overlap:

  • MDR clinical evaluation maps to AI Act risk assessment
  • MDR technical documentation covers much of Annex IV requirements
  • MDR post-market surveillance aligns with AI Act monitoring
  • MDR vigilance reporting overlaps with serious incident reporting (Art. 73)

The EU AI Act explicitly states that conformity assessment procedures under existing legislation can fulfill AI Act requirements where they overlap.

Where the AI Act Adds New Requirements

Bias and Fairness (Art. 10)

MDR doesn't explicitly require bias testing across demographic groups. The AI Act requires documented examination of training data for bias — particularly important for AI diagnostics that may perform differently across ethnicities, ages, or genders.
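As a rough illustration of what a disaggregated performance check looks like in practice — not a compliance tool, and with entirely hypothetical data, metric choice, and group labels — a per-subgroup sensitivity comparison can be sketched as:

```python
from collections import defaultdict

def sensitivity_by_group(y_true, y_pred, groups):
    """Per-subgroup sensitivity (true positive rate): the kind of
    disaggregated check an Art. 10 bias examination implies."""
    tp = defaultdict(int)   # true positives per group
    pos = defaultdict(int)  # actual positives per group
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1:
            pos[g] += 1
            if p == 1:
                tp[g] += 1
    return {g: tp[g] / pos[g] for g in pos if pos[g] > 0}

# Toy data: ground-truth labels, model predictions, and an age band
# per case. All values are illustrative.
y_true = [1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]
groups = ["<65", "<65", "<65", "<65", "65+", "65+", "65+", "65+"]

rates = sensitivity_by_group(y_true, y_pred, groups)
# A gap between groups (here <65 vs 65+) is exactly what the
# documented examination should surface and explain.
```

A real examination would cover multiple metrics and the training data itself, not just held-out predictions, but the shape of the analysis is the same: compute per-group, compare, document.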

Explainability (Art. 13)

MDR requires instructions for use, but the AI Act's transparency requirements are more specific about AI behavior — users must understand the system's capabilities, limitations, and decision-making logic.

Human Oversight Design (Art. 14)

MDR assumes physician oversight but doesn't mandate specific AI oversight mechanisms. The AI Act requires designed-in measures for humans to understand, interpret, and override AI outputs — relevant for clinical decision support systems.
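One way to make "designed-in" concrete: the AI output is never applied directly; it is paired with an explicit clinician decision, and both are logged. This is a minimal sketch with hypothetical types and field names, not a prescribed implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Recommendation:
    """Hypothetical AI output from a clinical decision support system."""
    patient_id: str
    suggestion: str
    confidence: float

@dataclass
class OversightLog:
    entries: list = field(default_factory=list)

    def record(self, rec, clinician_id, accepted, rationale=""):
        # Art. 14-style design: every suggestion passes through an
        # explicit human accept/override decision, with an audit trail.
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "patient_id": rec.patient_id,
            "suggestion": rec.suggestion,
            "confidence": rec.confidence,
            "clinician": clinician_id,
            "accepted": accepted,
            "rationale": rationale,
        })
        return accepted  # the human decision is what takes effect
```

The point of the structure is that an override is a first-class, recorded event rather than something that happens outside the system.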

Practical Steps

  1. Map your existing MDR documentation against AI Act Articles 8-15
  2. Identify gaps — typically in bias testing, explainability, and oversight design
  3. Augment, don't duplicate — build on your MDR compliance rather than creating parallel documentation
  4. Engage your notified body early — they'll need to understand the AI Act intersection
  5. Scan your AI systems to identify specific compliance gaps with article-level precision
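Steps 1 and 2 — mapping and gap-finding — can be sketched as a simple lookup. The specific artifact names below are illustrative placeholders, not a definitive crosswalk; the gaps flagged match the ones typically found (bias testing, logging, oversight design):

```python
# Hypothetical gap analysis: AI Act article -> existing MDR artifact,
# or None where nothing in the MDR file covers it yet.
AI_ACT_MAPPING = {
    "Art. 9 Risk management": "MDR clinical evaluation report",
    "Art. 10 Data governance": None,          # bias examination: common gap
    "Art. 11 Technical documentation": "MDR technical file",
    "Art. 12 Logging": None,                  # traceability: common gap
    "Art. 13 Transparency": "Instructions for use (partial)",
    "Art. 14 Human oversight": None,          # oversight design: common gap
    "Art. 15 Accuracy & robustness": "MDR performance evaluation",
}

gaps = [article for article, artifact in AI_ACT_MAPPING.items()
        if artifact is None]
```

Even a table this crude is a useful starting point for the notified-body conversation in step 4, because it frames the discussion article by article.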

Timeline Consideration

Healthcare AI companies should note: the EU AI Act's obligations for AI in Annex I products (which includes medical devices) apply from August 2, 2027, one year later than the general high-risk deadline of August 2, 2026. But starting now gives you the runway to do this right.

Check your compliance status

Scan your AI product against the EU AI Act framework in 60 seconds.
