EU AI Act's Facial Recognition Ban: Untargeted Scraping Prohibition & Compliance Strategies
Introduction: The EU's Red Line on Facial Recognition Databases
Regulation (EU) 2024/1689, known as the EU AI Act, establishes one of the world's first comprehensive legal frameworks for artificial intelligence. Among its most significant provisions is Article 5(1)(e), which creates a clear prohibition against a specific practice: using AI systems to perform untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases. This prohibition, part of the 'unacceptable risk' category, applies from 2 February 2025. It targets the data collection practices that precede facial recognition itself, addressing fundamental rights concerns linked to mass surveillance. For businesses developing or deploying AI, understanding this ban is critical to avoid penalties of up to EUR 35 million or 7% of global annual turnover, whichever is higher. This article provides an in-depth analysis of Article 5(1)(e), its compliance implications ahead of the 2026 deadlines, and practical strategies for organizations operating in the EU and beyond.
Understanding Article 5(1)(e): The Four Conditions of Prohibition
Article 5(1)(e) does not ban all facial recognition or all image scraping. Instead, it establishes a precise, cumulative set of conditions that trigger the prohibition. All four must be met for a practice to be illegal under the AI Act.
1. Market Placement, Service, or Use of an AI System
The prohibition applies to the placing of AI systems on the market, their putting into service, and their use within the EU. This broad scope covers providers, distributors, and deployers alike.
2. Intent to Create or Expand a Facial Recognition Database
The activity must be undertaken with the specific purpose of establishing or enlarging a database of facial images designed for recognition. Intent is a key element: general image datasets not intended for facial recognition fall outside the prohibition.
3. Use of AI for Untargeted Scraping
The AI system must be used for 'untargeted' scraping. This refers to the automated, indiscriminate collection of facial images without a specific, predefined subject. Targeted scraping—such as collecting images of a specific individual for a legitimate purpose with a legal basis—is explicitly excluded from this ban, though it remains subject to other laws like the GDPR.
4. Sourcing from the Internet or CCTV Footage
The images must be scraped from publicly accessible spaces on the internet (social media, websites) or from closed-circuit television footage. The Recitals of the AI Act highlight the particular intrusiveness of harvesting biometric data from these sources without consent.
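The cumulative logic of these four conditions can be sketched in code. The following is an illustrative model only, with hypothetical field names; it is not a legal test and does not substitute for legal analysis:

```python
from dataclasses import dataclass


@dataclass
class ScrapingActivity:
    """Illustrative record of an AI-driven image-collection activity."""
    uses_ai_system: bool               # condition 1: AI system placed on market, put into service, or used
    builds_recognition_database: bool  # condition 2: intent to create/expand a facial recognition database
    is_untargeted: bool                # condition 3: indiscriminate collection with no predefined subject
    source: str                        # condition 4: origin of the images


# Sources named by Article 5(1)(e)
PROHIBITED_SOURCES = {"internet", "cctv"}


def prohibited_under_art_5_1_e(activity: ScrapingActivity) -> bool:
    """All four conditions must hold simultaneously; failing any one
    takes the activity outside the Article 5(1)(e) prohibition
    (other laws, such as the GDPR, may still apply)."""
    return (
        activity.uses_ai_system
        and activity.builds_recognition_database
        and activity.is_untargeted
        and activity.source in PROHIBITED_SOURCES
    )
```

Note how targeted collection (`is_untargeted=False`) or a licensed dataset source drops the activity out of the prohibition's scope, mirroring the exclusions discussed above.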
This prohibition aligns with existing GDPR enforcement. For example, national data protection authorities have fined companies like Clearview AI for similar practices, citing violations of data protection principles. The AI Act now provides a specific, ex-ante legal basis to ban such activities outright. Organizations involved in AI training for applications like deepfake generation or advanced computer vision must carefully audit their data sourcing to ensure they do not inadvertently cross this red line.
Compliance Strategies: A Step-by-Step Guide for 2026
With obligations for high-risk AI systems applying from 2 August 2026, organizations must begin preparing now. Compliance with Article 5(1)(e) requires proactive governance.
Step 1: Conduct a Data Sourcing Audit
Map all data sources used for training or operating AI systems that process facial images. Document the collection method (targeted vs. untargeted), source (internet, CCTV, licensed datasets), and the specific purpose of the database. This audit is foundational for demonstrating compliance and should be integrated into your broader EU AI Act compliance roadmap.
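As a starting point, the audit findings can be captured in a simple machine-readable inventory. The schema below is a hypothetical sketch, not a regulatory template; the field names are assumptions:

```python
from dataclasses import dataclass
from enum import Enum


class CollectionMethod(Enum):
    TARGETED = "targeted"
    UNTARGETED = "untargeted"


class SourceType(Enum):
    INTERNET = "internet"
    CCTV = "cctv"
    LICENSED_DATASET = "licensed_dataset"


@dataclass
class DataSourceRecord:
    """One entry in the facial-image data sourcing inventory."""
    name: str
    method: CollectionMethod
    source: SourceType
    purpose: str
    contains_facial_images: bool


def flag_high_risk(records: list[DataSourceRecord]) -> list[DataSourceRecord]:
    """Flag sources combining untargeted collection, internet/CCTV
    origin, and facial images -- the pattern Article 5(1)(e)
    prohibits for recognition databases."""
    return [
        r for r in records
        if r.contains_facial_images
        and r.method is CollectionMethod.UNTARGETED
        and r.source in (SourceType.INTERNET, SourceType.CCTV)
    ]
```

Flagged entries are candidates for remediation (deletion, re-sourcing from licensed datasets, or documented legal justification), and the inventory itself becomes compliance evidence.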
Step 2: Implement an Ethical AI & Data Governance Framework
Establish clear policies that prohibit untargeted scraping of facial images for recognition databases. Integrate these policies into procurement processes, vendor contracts, and developer guidelines. Frameworks like the voluntary NIST AI Risk Management Framework (AI RMF 1.0)—with its four core functions of Govern, Map, Measure, and Manage—provide an excellent structure for building such governance. The NIST AI RMF Playbook offers actionable steps to operationalize these principles.
Step 3: Leverage Technical and Governance Tools
Manual compliance checks are insufficient at scale. Specialized AI governance platforms can automate monitoring and documentation. Tools like Holistic AI or Credo AI help manage AI inventories, assess risks, and ensure data provenance aligns with regulatory requirements. For a detailed comparison of such platforms, see our guide on the best AI governance platforms for EU AI Act compliance.
Step 4: Prepare for Enforcement and Documentation
The EU AI Office, established within the European Commission, will oversee general-purpose AI models and coordinate enforcement. Each EU member state will also designate a national competent authority. Organizations must maintain detailed records of their data governance decisions to demonstrate due diligence if investigated.
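One lightweight way to keep such records is an append-only, timestamped decision log. The sketch below is an illustrative approach, assuming a JSON Lines file; the field names and format are not mandated by the Act:

```python
import json
from datetime import datetime, timezone


def log_governance_decision(path: str, dataset: str, decision: str, rationale: str) -> None:
    """Append one data-governance decision to a JSON Lines audit log.

    A timestamped, append-only record of why each dataset was approved
    or rejected helps demonstrate due diligence if a competent
    authority investigates.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset": dataset,
        "decision": decision,    # e.g. "approved" or "rejected"
        "rationale": rationale,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

In practice, such logs would feed into the broader documentation set (risk assessments, vendor attestations, audit reports) that enforcement authorities may request.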
Global Comparisons: U.S. State Laws and the Rise of NIST AI RMF
While the EU AI Act sets a direct prohibition, the regulatory landscape in the United States is evolving differently. As of early 2025, there is no comprehensive federal AI legislation. However, U.S. states are increasingly incorporating voluntary technical frameworks into legal requirements.
- Colorado AI Act (SB 24-205): Effective 1 February 2026, this law requires deployers of high-risk AI systems to use reasonable care to avoid algorithmic discrimination. While it doesn't ban scraping per se, compliance with recognized frameworks like the NIST AI RMF is becoming a benchmark for demonstrating 'reasonable care.'
- New York City Local Law 144: In effect since 5 July 2023, this law requires bias audits for automated employment decision tools (AEDTs).
- Emerging Trend: States such as Texas and Montana are proposing or have enacted laws that reference or mandate elements of the NIST AI RMF or ISO/IEC 42001. Courts are also beginning to use these voluntary frameworks to define the 'standard of care' in negligence cases, effectively making them quasi-mandatory for risk mitigation.
This creates a fragmented but significant trend: compliance with the NIST AI RMF is no longer just a best practice but a potential legal shield. Organizations operating transatlantically should consider aligning their AI governance programs with both the EU AI Act's specific prohibitions and the NIST AI RMF's flexible framework to cover multiple jurisdictions. For more on global AI governance trends, explore our complete guide to AI governance for emerging technologies.
Risk Mitigation for High-Risk Sectors
Certain industries face heightened scrutiny under Article 5(1)(e).
Surveillance and Security
Companies providing or using CCTV analytics, crowd monitoring, or public space security systems must ensure their facial recognition databases are not built via untargeted scraping from public footage. Data should be collected with a specific, lawful purpose and, where possible, with consent or under a clear legal authority for targeted collection.
Marketing and Retail
AI used for customer analytics, personalized advertising, or in-store tracking that involves facial recognition must source data ethically. Using scraped social media photos to build customer profile databases is a direct violation. Marketing teams should work closely with legal and compliance units to vet third-party data providers.
AI Development and Research
Startups and research institutions training facial recognition models must use curated, licensed datasets with verified provenance. The era of indiscriminately scraping facial images from the web to build recognition databases is over. This shift also intersects with copyright law, as seen in ongoing litigation around AI training data. For related insights, read about AI copyright compliance challenges.
Conclusion: Navigating the New Compliance Frontier
Article 5(1)(e) of the EU AI Act represents a decisive move to curb privacy-invasive practices before they scale. Its prohibition on untargeted scraping to build facial recognition databases sets a clear boundary for ethical AI development. As the 2 August 2026 deadline for high-risk system obligations approaches, organizations cannot afford to wait. The convergence of EU regulations with U.S. state laws adopting frameworks like the NIST AI RMF signals a global tightening of AI governance. Proactive steps—data audits, ethical frameworks, and leveraging governance tools—are essential to mitigate enforcement risks and build trustworthy AI systems.
Key Takeaways
- Article 5(1)(e) of Regulation (EU) 2024/1689 bans untargeted scraping of facial images from internet/CCTV to create/expand recognition databases; it applies from 2 February 2025.
- Four cumulative conditions define the prohibition: AI system use, intent for a recognition database, untargeted scraping method, and specific sources.
- Compliance requires data sourcing audits, ethical AI frameworks aligned with tools like the NIST AI RMF, and documentation for enforcement by the EU AI Office and national authorities.
- U.S. states like Colorado (laws effective 1 February 2026) are incorporating voluntary frameworks like the NIST AI RMF into legal standards of care.
- Sectors like surveillance, marketing, and AI development face high risks and must prioritize ethical data sourcing to avoid penalties up to 7% of global turnover.
This content is for informational purposes only and does not constitute legal advice.
Navigating the EU AI Act's complexities requires specialized knowledge and tools. AIGovHub provides resources and analysis to help your organization stay ahead of compliance deadlines. Explore our EU AI Act implementation guide and vendor comparisons to build a robust governance program today.