
EU AI Act Article 5 Prohibited Practices: A Complete Implementation Guide

Updated: March 5, 2026

This guide provides a comprehensive overview of prohibited AI practices under Article 5 of the EU AI Act, including social scoring, real-time biometric surveillance, and predictive policing. Learn the enforcement timeline, compliance steps, and how to navigate interactions with GDPR and DSA.

Introduction: Navigating the EU AI Act's Red Lines

The EU AI Act, Regulation (EU) 2024/1689, establishes the world's first comprehensive legal framework for artificial intelligence. Among its most critical provisions are the "red lines" defined in Article 5—practices considered so harmful to fundamental rights and societal values that they are outright prohibited. This guide provides organizations with a practical implementation framework for navigating these prohibited AI practices, which apply from 2 February 2025, with governance rules and enforcement mechanisms becoming fully applicable from 2 August 2025.

You'll learn:

  • The specific prohibited practices under Article 5 and their enforcement timeline
  • How these prohibitions interact with existing regulations like GDPR and DSA
  • Recent regulatory opinions from the EDPB and EDPS on implementation
  • Security vulnerabilities in AI systems that amplify compliance risks
  • A step-by-step compliance framework with actionable checklists
  • How to distinguish between prohibited and high-risk AI uses

This content is for informational purposes only and does not constitute legal advice.

Understanding the EU AI Act Article 5 Timeline

The EU AI Act follows a phased implementation approach. For Article 5 prohibited practices, organizations must understand two key dates:

  • 2 February 2025: Prohibited AI practices under Article 5 become applicable, along with AI literacy obligations under Article 4.
  • 2 August 2025: Governance rules and obligations for general-purpose AI (GPAI) models apply, strengthening enforcement mechanisms.
  • 2 August 2026: Full applicability of the AI Act, including obligations for high-risk AI systems and transparency requirements.

Penalties for violating Article 5 are severe: up to EUR 35 million or 7% of global annual turnover, whichever is higher. Each EU Member State must designate a national competent authority for enforcement, creating a decentralized but coordinated regulatory landscape.
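The "whichever is higher" rule means the cap scales with company size. A quick illustration in Python (the figures are the statutory maximums under the Act, not predictions of actual fines, which authorities set case by case):

```python
def article5_max_fine(global_annual_turnover_eur: float) -> float:
    """Statutory maximum fine for an Article 5 violation:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# For a company with EUR 1 billion turnover, 7% (EUR 70M) exceeds the flat cap;
# for a EUR 100M company, the EUR 35M floor applies instead.
print(article5_max_fine(1_000_000_000))  # 70000000.0
```

In practice this means the 7% prong dominates once global turnover exceeds EUR 500 million.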

Six Key Prohibited AI Practices Under Article 5

Article 5(1) lists eight prohibited practices in total. The six categories below are those most likely to affect organizations, each banned because it poses an unacceptable risk to fundamental rights, safety, and democratic values.

1. Social Scoring

AI systems that evaluate or classify natural persons or groups based on their social behavior or known, inferred, or predicted personal characteristics, where the resulting social score leads to detrimental or unfavorable treatment that is unjustified or disproportionate. Although early drafts limited this ban to public authorities, the final Act applies it to public and private actors alike. The prohibition aims to prevent surveillance-based social credit systems that could undermine human dignity and autonomy.

2. Real-Time Remote Biometric Identification in Public Spaces

The use of "real-time" remote biometric identification systems in publicly accessible spaces for law enforcement purposes is prohibited, with narrow exceptions requiring prior authorization by a judicial authority or an independent administrative authority. These exceptions cover targeted searches for specific victims of crime, prevention of specific, substantial, and imminent terrorist threats, and the localization or identification of suspects of certain serious crimes.

3. Predictive Policing Systems

AI systems that assess or predict the risk of a natural person committing a criminal offense based solely on profiling or on assessing their personality traits and characteristics. This prohibition addresses concerns about algorithmic bias reinforcing existing inequalities in law enforcement; it does not cover systems that merely support a human assessment based on objective, verifiable facts directly linked to criminal activity.

4. Emotion Recognition in Workplace and Educational Settings

AI systems that infer the emotions of natural persons in workplaces and educational institutions, except where the system is intended for medical or safety reasons. This prohibition protects psychological privacy in environments marked by power imbalances, where monitoring of emotional states could enable manipulative or coercive practices.

5. Exploitation of Vulnerabilities

AI systems that exploit the vulnerabilities of a person or group due to their age, disability, or a specific social or economic situation, with the objective or effect of materially distorting their behavior in a manner that causes, or is reasonably likely to cause, significant harm. This includes manipulative practices that leverage cognitive biases or limitations.

6. Untargeted Scraping of Facial Images

AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage. This addresses privacy concerns about mass surveillance capabilities.

Interplay with GDPR, DSA, and Regulatory Opinions

The EU AI Act does not operate in isolation. Organizations must consider its interaction with existing frameworks, particularly the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA).

GDPR Takes Precedence

Where AI systems process personal data, GDPR requirements continue to apply fully. The European Commission's non-binding Guidelines emphasize that GDPR takes precedence in matters of personal data protection. Organizations must still conduct Data Protection Impact Assessments (DPIAs) for high-risk processing activities involving AI.

EDPB/EDPS Joint Opinion on Implementation Simplification

The European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) published a joint opinion on the European Commission's Digital Omnibus Proposal, which aims to simplify AI Act implementation. Key concerns include:

  • Risk to GDPR Safeguards: The proposal to extend use of sensitive data for bias correction from high-risk to all AI systems risks undermining GDPR core principles.
  • Reduced Transparency: Removing registration requirements for self-classified non-high-risk systems would reduce transparency and accountability in AI governance.
  • Supervision Gaps: EU-level AI regulatory sandboxes lack mandatory Data Protection Authority involvement when personal data is processed, creating potential supervision gaps.

These opinions highlight that while simplification is needed, it must not compromise fundamental rights protections. Organizations should monitor how these concerns shape final implementation measures.

AI Data Risks and Security Vulnerabilities

Recent incidents highlight the security vulnerabilities that make compliance with Article 5 particularly challenging.

The Data Awareness Gap

Many organizations are slow to address how AI systems can expose both business operations and personal data. This gap in understanding and governing the data that feeds AI systems increases the risk of breaches, bias, and regulatory penalties. Without proper governance, prohibited practices can be implemented inadvertently through third-party tools or poorly designed systems.

AI System Manipulation Vulnerabilities

Microsoft discovered AI recommendation poisoning affecting 31 companies across 14 industries, demonstrating broad vulnerability in current AI implementations. Turnkey tools make such AI manipulation attacks trivially easy to execute, particularly targeting AI summarization features. This represents a systemic security issue where malicious actors can manipulate AI systems to generate misleading or harmful outputs, potentially violating Article 5 prohibitions against manipulative practices.

These vulnerabilities underscore why technical safeguards and robust governance are essential for Article 5 compliance.

Step-by-Step Compliance Framework

Step 1: Conduct AI Inventory and Risk Assessment

Begin by creating a comprehensive inventory of all AI systems in use, under development, or planned. For each system:

  • Document the purpose, data sources, and processing activities
  • Identify whether the system falls under Article 5 prohibited practices
  • Assess potential impacts on fundamental rights
  • Map data flows and identify GDPR compliance requirements

Tools like AIGovHub's AI governance platform can automate this inventory process and track compliance across multiple systems.
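The inventory fields above can be captured in a machine-readable record so that screening scales beyond a spreadsheet. The sketch below is illustrative only: the field names and the `ARTICLE_5_CATEGORIES` labels are our own shorthand for the six practices discussed earlier, not terms defined in the Act.

```python
from dataclasses import dataclass, field

# Shorthand labels for the Article 5 categories covered above (illustrative, not legal terms).
ARTICLE_5_CATEGORIES = {
    "social_scoring",
    "realtime_remote_biometric_id",
    "predictive_policing",
    "emotion_recognition_work_edu",
    "vulnerability_exploitation",
    "untargeted_facial_scraping",
}

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    data_sources: list[str]
    processes_personal_data: bool  # triggers the GDPR analysis / possible DPIA
    suspected_article5_categories: set[str] = field(default_factory=set)

    def requires_escalation(self) -> bool:
        """Flag any system touching a prohibited category for legal review."""
        return bool(self.suspected_article5_categories & ARTICLE_5_CATEGORIES)

hr_tool = AISystemRecord(
    name="candidate-screening-v2",
    purpose="Rank job applicants",
    data_sources=["CV uploads", "video interviews"],
    processes_personal_data=True,
    suspected_article5_categories={"emotion_recognition_work_edu"},
)
print(hr_tool.requires_escalation())  # True
```

A flagged record is a trigger for human legal review, not a verdict: whether a system actually falls under Article 5 depends on facts no schema can capture.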

Step 2: Implement Technical Safeguards

Based on your risk assessment, implement appropriate technical measures:

  • Access Controls: Restrict access to sensitive AI systems that could be misused for prohibited purposes
  • Data Minimization: Implement technical measures to ensure only necessary data is collected and processed
  • Bias Detection: Deploy tools to identify and mitigate algorithmic bias that could lead to discriminatory outcomes
  • Security Measures: Protect against manipulation attacks through robust authentication, monitoring, and response mechanisms
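Data minimization, for instance, can be enforced in code rather than by policy alone. A minimal allow-list sketch (the field names are hypothetical, chosen to echo the recruitment example from the high-risk discussion):

```python
# Allow-list approach: only fields with a documented processing purpose pass through.
ALLOWED_FIELDS = {"applicant_id", "skills", "years_experience"}

def minimize(record: dict) -> dict:
    """Drop every field not on the allow-list before data reaches the model."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "applicant_id": "A-1042",
    "skills": ["python", "sql"],
    "years_experience": 6,
    "facial_image_url": "https://example.invalid/img",  # never forwarded
    "inferred_mood": "anxious",  # emotion data: excluded entirely
}
print(sorted(minimize(raw)))  # ['applicant_id', 'skills', 'years_experience']
```

Inverting the default (deny unless explicitly allowed) means newly added data fields stay out of AI pipelines until someone documents a lawful purpose for them.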

Step 3: Establish Governance Policies

Develop clear policies and procedures for AI development and deployment:

  • Prohibited Practices Policy: Explicitly forbid development or use of Article 5 prohibited AI systems
  • Third-Party Due Diligence: Vet vendors and partners for compliance with Article 5 requirements
  • Incident Response Plan: Establish procedures for reporting and addressing potential violations
  • Documentation Requirements: Maintain records demonstrating compliance efforts

Step 4: Train Staff on AI Ethics and Compliance

AI literacy obligations under Article 4 apply from 2 February 2025. Develop training programs that cover:

  • Specific Article 5 prohibited practices and why they're banned
  • Recognizing potential violations in daily operations
  • Reporting procedures for concerns about AI systems
  • Interactions between AI Act, GDPR, and other relevant regulations

Step 5: Document Compliance and Prepare for Audits

Maintain comprehensive documentation demonstrating your compliance efforts:

  • Risk assessment reports and mitigation plans
  • Training records and materials
  • Technical safeguard implementation documentation
  • Third-party due diligence reports
  • Incident response logs and remediation actions

Prohibited vs. High-Risk AI: Understanding the Distinction

The EU AI Act categorizes AI systems into four risk levels: Unacceptable (prohibited), High-risk, Limited risk, and Minimal risk. Understanding the distinction between prohibited and high-risk systems is crucial for appropriate compliance measures.

| Aspect | Prohibited AI (Article 5) | High-Risk AI (Annex III) |
| --- | --- | --- |
| Legal status | Banned outright | Permitted with strict requirements |
| Examples | Social scoring, real-time biometric surveillance in public spaces | AI in recruitment, critical infrastructure, medical devices |
| Compliance focus | Avoidance and prevention | Risk management and mitigation |
| Penalties | Up to EUR 35M or 7% global turnover | Up to EUR 15M or 3% global turnover |
| Timeline | Applicable from 2 Feb 2025 | Obligations apply from 2 Aug 2026 |

High-risk AI systems in recruitment and HR are classified under Annex III (area 4) and will require conformity assessments, quality management systems, and human oversight when obligations become applicable from 2 August 2026.

Common Pitfalls to Avoid

  • Assuming In-House Development is Exempt: While the AI Act primarily targets providers placing systems on the market, organizations using AI for internal purposes must still comply with Article 5 prohibitions.
  • Overlooking Third-Party Tools: AI systems purchased from vendors or using open-source components must be assessed for Article 5 compliance.
  • Confusing Limited Exceptions: The limited exceptions for real-time biometric identification require judicial authorization and specific circumstances—don't assume broad applicability.
  • Neglecting Employee Training: AI literacy obligations require organizations to ensure staff understand prohibited practices and compliance requirements.
  • Underestimating Documentation Needs: In a decentralized enforcement environment, thorough documentation is essential for demonstrating compliance to national authorities.

Frequently Asked Questions

When do Article 5 prohibited practices actually become illegal?

Prohibited AI practices under Article 5 have applied since 2 February 2025, and the governance rules and enforcement mechanisms backing them became fully applicable on 2 August 2025. Any organization that has not yet assessed its AI systems against Article 5 should do so immediately.

How does Article 5 interact with existing GDPR requirements?

GDPR continues to apply fully to AI systems processing personal data. The European Commission's Guidelines state that GDPR takes precedence in matters of personal data protection. Organizations must comply with both frameworks, conducting DPIAs where required under GDPR while also ensuring AI systems don't violate Article 5 prohibitions.

Are there any exceptions to the biometric surveillance prohibition?

Yes, but they are narrowly defined. Real-time remote biometric identification in publicly accessible spaces is permitted only for: targeted searches for specific victims of crime; prevention of specific, substantial, and imminent terrorist threats; or detection, localization, identification, or prosecution of perpetrators or suspects of specific serious crimes. These exceptions require judicial authorization and appropriate safeguards.

How should organizations handle AI systems that might fall into gray areas?

When uncertainty exists about whether an AI system constitutes a prohibited practice, organizations should: 1) Conduct a thorough fundamental rights impact assessment, 2) Consult with legal experts familiar with AI regulations, 3) Implement additional safeguards to minimize risks, and 4) Consider whether alternative approaches could achieve the same goal without prohibited elements. When in doubt, a precautionary approach is advisable given the severe penalties for violations.

What about AI systems already in production before February 2025?

The AI Act does not include explicit grandfathering provisions for existing systems. Organizations should assess all AI systems—including those already deployed—against Article 5 requirements and take necessary actions to achieve compliance by the applicable deadlines. This may include modifying, restricting, or decommissioning non-compliant systems.

Next Steps for Your Organization

Navigating Article 5 prohibited practices requires proactive planning and systematic implementation. Begin by conducting a comprehensive AI inventory and risk assessment to identify any systems that might fall under prohibited categories. Develop clear policies and training programs to ensure all staff understand the "red lines" established by the EU AI Act.

For organizations seeking structured guidance, our EU AI Act compliance roadmap provides detailed steps for full regulatory alignment. Additionally, consider leveraging automated compliance tracking through AIGovHub's AI governance platform to maintain ongoing compliance as regulations evolve.

Download our free EU AI Act Article 5 compliance template to jumpstart your implementation efforts, or schedule a vendor assessment consultation to evaluate your current AI systems against Article 5 requirements. With prohibited practices already in force since February 2025 and penalties reaching EUR 35 million or 7% of global turnover, robust governance frameworks that protect both your organization and fundamental rights are no longer optional.

Some links in this article are affiliate links. See our disclosure policy.