
The EU AI Act

The EU AI Act is the world's first comprehensive AI regulation. It establishes a risk-based framework: AI systems are classified into tiers, and each tier carries requirements proportional to its potential for harm.

Risk Classification System

  • Unacceptable Risk: Banned entirely. Examples: social scoring, subliminal manipulation, real-time biometric surveillance (with narrow exceptions), exploitation of vulnerabilities.
  • High Risk: Strict requirements covering risk management, data governance, documentation, human oversight, accuracy, robustness, and cybersecurity. Examples: credit scoring, hiring/recruitment, medical devices, critical infrastructure, law enforcement, migration management.
  • Limited Risk: Transparency obligations. Examples: chatbots (must disclose their AI nature), emotion recognition systems, biometric categorization, deepfake generation.
  • Minimal Risk: No specific requirements; voluntary codes of conduct are encouraged. Examples: AI-powered spam filters, video game AI, inventory management systems.
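The tiered logic above can be sketched as a simple lookup. This is illustrative only: the use-case labels and the mapping are hypothetical, and in practice each system's tier requires a case-by-case legal assessment, not a table lookup.

```python
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned entirely
    HIGH = "high"                  # strict requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # voluntary codes of conduct

# Hypothetical labels mirroring the examples in the tiers above.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "hiring_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> Optional[RiskTier]:
    # None signals "not in the lookup": the system needs a
    # case-by-case legal assessment, not a default tier.
    return USE_CASE_TIERS.get(use_case)
```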

Prohibited Practices

The EU AI Act bans the following AI practices outright:

  • Social scoring: Use of AI (by public or private actors) to evaluate people's trustworthiness based on social behavior, leading to detrimental treatment
  • Subliminal manipulation: AI techniques that manipulate behavior through subliminal means, causing harm
  • Exploitation of vulnerabilities: AI that exploits age, disability, or socioeconomic status to distort behavior
  • Real-time remote biometric identification: In public spaces by law enforcement (with narrow exceptions for terrorism, missing children, serious crimes)
  • Emotion recognition: In workplaces and educational institutions (with limited medical/safety exceptions)
  • Untargeted facial recognition scraping: Building facial recognition databases from internet/CCTV without consent

High-Risk AI Requirements

  1. Risk Management System

    Continuous, iterative process for identifying, analyzing, estimating, and evaluating risks throughout the AI system lifecycle.

  2. Data Governance

    Training, validation, and testing datasets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete. Examination for possible biases is mandatory.

  3. Technical Documentation

    Detailed documentation of the AI system's design, development process, capabilities, limitations, and intended purpose.

  4. Record-Keeping

    Automatic logging of events (audit trails) during the AI system's operation to enable post-hoc analysis.

  5. Transparency

    Clear instructions for deployers, including intended purpose, level of accuracy, and known limitations.

  6. Human Oversight

    Measures enabling human oversight, including the ability to understand, monitor, and override AI decisions.

  7. Accuracy, Robustness, and Cybersecurity

    Systems must achieve appropriate levels of accuracy, be resilient to errors and adversarial attacks, and include cybersecurity measures.
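Requirement 4 (record-keeping) can be illustrated with a minimal audit-trail sketch. The `AuditLog` class and its event schema are invented for illustration; the actual logging obligations depend on the system's intended purpose and are spelled out in the Act itself.

```python
import json
import time

class AuditLog:
    """Minimal append-only event log, sketching the automatic
    record-keeping (audit trail) requirement for high-risk systems."""

    def __init__(self):
        self._events = []

    def record(self, event_type: str, detail: dict) -> dict:
        # Timestamp every event so decisions can be reconstructed post hoc.
        entry = {"ts": time.time(), "type": event_type, "detail": detail}
        self._events.append(entry)
        return entry

    def export_json(self) -> str:
        # Serialize the full trail for auditors or regulators.
        return json.dumps(self._events)

# Usage: log an automated decision and the human override of it.
log = AuditLog()
log.record("prediction", {"input_id": "app-123", "score": 0.82})
log.record("human_override", {"input_id": "app-123", "reason": "manual review"})
```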

GDPR Intersection

GDPR + AI Act: The AI Act does not replace GDPR; organizations must comply with both. GDPR's data protection requirements (lawful basis, data minimization, safeguards around automated decision-making) apply to AI systems that process personal data. The AI Act adds AI-specific requirements on top of GDPR obligations.

Enforcement and Penalties

Prohibited Practices

Fines up to 35 million euros or 7% of global annual turnover (whichever is higher) for violations of banned AI practices.

High-Risk Non-Compliance

Fines up to 15 million euros or 3% of global annual turnover for failing to meet high-risk AI system requirements.

Incorrect Information

Fines up to 7.5 million euros or 1% of global annual turnover for supplying incorrect or misleading information to authorities.

SME Provisions

Lower penalty caps for small and medium enterprises, with the lesser of the two figures (fixed amount vs. percentage) applying.