
Article 5(1)(d) EU AI Act: The Predictive Policing Ban Explained

AIGovHub Editorial · March 11, 2026

Introduction: The EU's Landmark Ban on Predictive Policing AI

The EU AI Act, Regulation (EU) 2024/1689, establishes the world's first comprehensive legal framework for artificial intelligence. Among its most significant provisions is Article 5(1)(d), which prohibits AI systems that assess or predict the likelihood of individuals committing criminal offences based solely on profiling. This prohibition, which has applied since 2 February 2025, represents a critical boundary in AI governance, aiming to prevent algorithmic bias and protect fundamental rights in justice systems. For compliance officers and AI developers, understanding this ban is essential to avoid severe penalties—up to EUR 35 million or 7% of global annual turnover for prohibited practices.

This article provides an in-depth analysis of Article 5(1)(d), breaking down its legal text, practical implications, and compliance strategies. We'll explore how this prohibition fits within the EU AI Act's risk-based approach and what it means for businesses operating in or with the European Union.

Section 1: Breaking Down Article 5(1)(d) – Key Definitions and Scope

Article 5(1)(d) prohibits the placing on the market, putting into service, or use of "an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics." The provision expressly carves out AI systems used to support a human assessment that is already based on objective and verifiable facts directly linked to a criminal activity. Let's examine the key components of this prohibition.

What Constitutes "Assessment or Prediction of Criminal Offenses"?

The prohibition targets AI systems designed to evaluate or forecast whether an individual will commit a crime. This includes tools used by law enforcement authorities, entities acting on their behalf, and EU institutions supporting law enforcement. Importantly, the ban is limited in scope—it does not entirely prohibit crime prediction technologies but focuses on systems that rely exclusively on profiling or personality traits without objective, verifiable facts linked to criminal activity.

Where an AI system does not meet all the conditions for prohibition, it may instead be classified as a high-risk AI system under Annex III of the EU AI Act. In that case it is subject to specific requirements and safeguards, including human oversight, rather than being outright banned.

The Critical Role of "Profiling" and "Personality Traits"

The prohibition hinges on the phrase "based solely on profiling." Profiling refers to any form of automated processing of personal data to evaluate certain personal aspects of a natural person, particularly to analyze or predict aspects concerning that person's behavior. The Act does not enumerate personality traits, but the term plausibly covers characteristics such as openness, conscientiousness, extraversion, agreeableness, and neuroticism.

The rationale emphasizes judging individuals based on actual behavior rather than predicted conduct, aligning with principles of legal certainty and equality before the law in EU criminal law. The prohibition aims to prevent AI systems from making determinations about individuals' criminal propensity based on generalized data patterns rather than specific, verifiable evidence.

Scope and Exceptions

The prohibition applies specifically to:

  • Law enforcement authorities
  • Entities acting on behalf of law enforcement authorities
  • EU institutions supporting law enforcement

Notably, administrative offenses fall outside this prohibition, and national differences in defining 'criminal offenses' may require further clarification. The ban does not apply to AI systems that use profiling as one component among other objective factors when assessing criminal risk.

Section 2: Practical Compliance Challenges for Businesses

Identifying whether your AI system falls under Article 5(1)(d) requires careful analysis. Here are the key compliance challenges organizations face.

Identifying Prohibited Use Cases

Businesses must scrutinize their AI applications to determine if they involve criminal offense prediction based solely on profiling. Potential high-risk areas include:

  • Law enforcement tools: Predictive policing systems that flag individuals as likely to commit crimes based on demographic data, location history, or social media activity
  • Hiring and employment screening: Tools that assess candidates' likelihood of workplace misconduct or theft based on personality assessments
  • Insurance underwriting: Systems that predict policyholders' likelihood of committing insurance fraud based on profiling
  • Financial services: AI that flags customers as potential money launderers based solely on behavioral patterns

For guidance on conducting thorough AI risk assessments, see our guide to modifying AI systems for EU AI Act compliance.

Conducting Risk Assessments Under the EU AI Act Framework

The EU AI Act employs a four-tier risk classification: Unacceptable (banned), High-risk, Limited risk (transparency), and Minimal risk. Article 5(1)(d) falls under the "Unacceptable risk" category. When conducting risk assessments:

  1. Determine if your system assesses or predicts criminal offenses
  2. Identify whether it relies "solely" on profiling or personality traits
  3. Assess whether objective, verifiable facts are incorporated
  4. Document your assessment process thoroughly

If your system doesn't meet all prohibition conditions but still involves criminal risk assessment, it likely qualifies as high-risk under Annex III (area 6: Law enforcement, which covers assessing the risk of a natural person offending or re-offending). High-risk AI systems have obligations applying from 2 August 2026, with an extended transition until 2 August 2027 for systems embedded in regulated products.
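The four-step assessment above can be sketched as a simple triage function. This is an illustrative sketch only, not a legal determination; the class and field names are invented for this example and do not come from the Act:

```python
from dataclasses import dataclass

# Hypothetical screening record; field names are illustrative, not from the Act.
@dataclass
class CrimePredictionSystem:
    predicts_criminal_offences: bool       # step 1: assesses/predicts offences
    relies_solely_on_profiling: bool       # step 2: profiling/personality traits only
    uses_objective_verifiable_facts: bool  # step 3: objective facts incorporated

def classify(system: CrimePredictionSystem) -> str:
    """Rough triage mirroring the four-step assessment (step 4, documentation,
    happens outside this function)."""
    if not system.predicts_criminal_offences:
        return "out of scope of Article 5(1)(d)"
    if system.relies_solely_on_profiling and not system.uses_objective_verifiable_facts:
        return "prohibited (Article 5(1)(d))"
    # Not prohibited, but criminal risk assessment remains high-risk under Annex III.
    return "high-risk (Annex III)"
```

A real assessment would, of course, record the reasoning behind each boolean and be reviewed by counsel; the point here is only that "solely on profiling" is the pivotal branch.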

Navigating the Gray Areas

The prohibition's language leaves several gray areas that require careful interpretation:

  • What constitutes "solely" versus "primarily" based on profiling?
  • How much objective evidence is needed to move from prohibited to high-risk classification?
  • How do national variations in criminal law definitions affect compliance?

Businesses should adopt a precautionary approach, erring on the side of caution when uncertainty exists. Regular monitoring of guidance from the EU AI Office (established within the European Commission) and national competent authorities is essential.

Section 3: Strategies for Compliance and Risk Mitigation

Organizations can take several proactive steps to ensure compliance with Article 5(1)(d) and related AI governance requirements.

Implementing Ethical AI Frameworks

Adopting established frameworks helps structure compliance efforts:

  • NIST AI Risk Management Framework (AI RMF 1.0): Use the four core functions (Govern, Map, Measure, Manage) to systematically address AI risks. The companion NIST AI RMF Playbook provides suggested actions.
  • ISO/IEC 42001: Implement this international standard for AI Management Systems (AIMS), which is certifiable and aligned with other ISO standards like ISO 27001.
  • Internal governance structures: Establish clear accountability, with senior management responsible for AI compliance decisions.

For a comprehensive approach, consider our complete guide to AI governance for emerging technologies.

Leveraging Transparency Tools and Documentation

Transparency is crucial for demonstrating compliance:

  • Maintain detailed documentation of AI system design, data sources, and decision logic
  • Implement explainability features that show how systems arrive at conclusions
  • Conduct regular audits to ensure systems aren't drifting toward prohibited profiling
  • Use tools like AIGovHub's AI governance monitoring platform to track regulatory changes and assess compliance status
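One lightweight way to start on the documentation point above is a structured record per AI system that can be serialized for audits. The shape below is a minimal sketch with made-up field names, not a format mandated by the Act or any auditor:

```python
import json
from dataclasses import dataclass, asdict

# Minimal documentation record; field names are illustrative only.
@dataclass
class AISystemRecord:
    name: str
    purpose: str
    data_sources: list           # where the system's input data comes from
    decision_logic_summary: str  # plain-language account of how outputs are produced
    last_audit: str              # ISO date of the most recent profiling-drift audit

    def to_json(self) -> str:
        """Serialize the record for an audit trail or registry export."""
        return json.dumps(asdict(self), indent=2)
```

Keeping such records in version control gives you a timestamped history of how each system's design and data sources evolved, which is exactly the kind of evidence regulators ask for.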

Utilizing Vendor Solutions for Risk Assessment

Specialized platforms can streamline compliance efforts. Some affiliate vendors offering relevant solutions include:

  • Holistic AI: Provides risk assessment platforms specifically designed for AI governance compliance
  • OneTrust: Offers integrated privacy and AI governance tools that help manage regulatory requirements

For a detailed comparison of AI governance platforms, see our review of the best AI governance platforms for EU AI Act compliance.

Section 4: Enforcement Timelines, Penalties, and Global Comparisons

EU AI Act Enforcement Timeline

Understanding the phased implementation is crucial for compliance planning:

  • 1 August 2024: EU AI Act entered into force
  • 2 February 2025: Prohibited AI practices (Article 5) and AI literacy obligations apply
  • 2 August 2025: Governance rules and obligations for general-purpose AI (GPAI) models apply
  • 2 August 2026: Obligations for high-risk AI systems (Annex III) and transparency obligations apply (full applicability with exceptions)
  • 2 August 2027: Extended transition for high-risk AI systems embedded in regulated products

Each EU Member State must designate a national competent authority for enforcement. The EU AI Office oversees GPAI and coordinates enforcement.
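For planning purposes, the phased timeline above lends itself to a simple lookup: given a date, which obligations are already in effect? The milestone labels below are shorthand for this sketch, not official terminology:

```python
from datetime import date

# Key EU AI Act milestone dates, taken from the timeline above.
MILESTONES = [
    (date(2024, 8, 1), "Act in force"),
    (date(2025, 2, 2), "Article 5 prohibitions and AI literacy apply"),
    (date(2025, 8, 2), "GPAI governance obligations apply"),
    (date(2026, 8, 2), "High-risk (Annex III) and transparency obligations apply"),
    (date(2027, 8, 2), "Extended transition ends for embedded high-risk systems"),
]

def obligations_in_effect(on: date) -> list:
    """Return the milestones that have already taken effect on a given date."""
    return [label for d, label in MILESTONES if d <= on]
```

For example, `obligations_in_effect(date(2025, 3, 1))` includes the Article 5 prohibitions but not yet the high-risk obligations.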

Penalties for Non-Compliance

Violations of Article 5(1)(d) carry severe consequences:

  • Up to EUR 35 million or 7% of global annual turnover for prohibited practices
  • For other violations: up to EUR 15 million or 3% of global annual turnover

These penalties apply to the prohibited practice itself, making compliance with Article 5(1)(d) particularly critical.
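The fine structure follows a "whichever is higher" rule for undertakings (Article 99): the cap is the greater of the fixed amount and the turnover percentage. A quick sketch of that arithmetic (note the Act applies the *lower* of the two for SMEs and start-ups, which this simplified version ignores):

```python
def max_fine_eur(worldwide_turnover_eur: int, prohibited_practice: bool) -> float:
    """Upper bound of the administrative fine for an undertaking:
    the higher of the fixed cap and the turnover percentage."""
    if prohibited_practice:
        # Article 5 violations: EUR 35M or 7% of worldwide annual turnover
        return max(35_000_000.0, worldwide_turnover_eur * 7 / 100)
    # Most other violations: EUR 15M or 3% of worldwide annual turnover
    return max(15_000_000.0, worldwide_turnover_eur * 3 / 100)
```

So a company with EUR 1 billion in turnover faces a ceiling of EUR 70 million for a prohibited practice, while a smaller firm below EUR 500 million in turnover hits the fixed EUR 35 million cap instead.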

Global Regulatory Comparisons

While the EU leads in comprehensive AI regulation, other jurisdictions have relevant requirements:

  • United States: No comprehensive federal AI legislation exists as of early 2026. However, the Colorado AI Act (SB 24-205), effective 1 February 2026, requires deployers of high-risk AI to use reasonable care to avoid algorithmic discrimination. NYC Local Law 144, enforced since 5 July 2023, requires bias audits for automated employment decision tools (AEDTs) used in hiring.
  • Illinois: The Artificial Intelligence Video Interview Act, effective 1 January 2020, requires consent and disclosure for AI-analyzed video interviews.
  • Global standards: The NIST AI RMF (January 2023) provides voluntary guidance, while ISO/IEC 42001 (December 2023) offers a certifiable international standard.

The EU's approach is notably more prescriptive than current US regulations, particularly regarding prohibited practices. For insights on how tech giants are responding to these regulations, see our analysis of AI security alerts and enterprise compliance.

Key Takeaways

  • Article 5(1)(d) of the EU AI Act prohibits AI systems that predict criminal offenses based solely on profiling or personality assessments, applying from 2 February 2025.
  • The prohibition targets law enforcement authorities and related entities, emphasizing legal certainty and preventing bias in criminal justice systems.
  • AI systems that don't meet all prohibition conditions but involve criminal risk assessment are classified as high-risk under Annex III, requiring specific safeguards.
  • Compliance requires careful risk assessment, documentation, and potentially re-engineering of AI systems to incorporate objective, verifiable facts.
  • Penalties for violations are severe: up to EUR 35 million or 7% of global annual turnover.
  • Global regulations vary, with the EU taking the most comprehensive approach to AI governance.

Navigating the Future of AI Governance

Article 5(1)(d) represents a significant milestone in AI regulation, establishing clear boundaries for what constitutes unacceptable AI practices. With the prohibition in force since 2 February 2025 and high-risk obligations approaching in August 2026, proactive compliance planning is essential. This includes conducting thorough risk assessments, implementing ethical AI frameworks, and staying informed about regulatory developments.

For ongoing guidance and tools to manage EU AI Act compliance, explore AIGovHub's AI governance monitoring platform, which provides real-time regulatory updates, compliance checklists, and risk assessment templates. Our platform helps organizations track the evolving regulatory landscape and implement effective governance strategies.

For a step-by-step implementation roadmap, see our comprehensive EU AI Act compliance roadmap guide.

This content is for informational purposes only and does not constitute legal advice. Some links in this article are affiliate links. See our disclosure policy.