EU AI Act Standard Setting: A Complete Guide to CEN-CENELEC JTC21 and Global Compliance

Updated: March 4, 2026 · 11 min read

This guide explains the critical standard-setting process under the EU AI Act, led by CEN-CENELEC JTC21. You'll learn how harmonized standards provide a presumption of conformity, the six-step drafting process, global governance implications including the EU-India partnership, and actionable steps for businesses to prepare for compliance deadlines.

Navigating the EU AI Act requires understanding not just the legal text but the technical standards that will define compliance in practice. The standard-setting process, led by CEN-CENELEC JTC21, is where the rubber meets the road for high-risk AI systems. This guide provides a detailed, step-by-step explanation of how AI safety standards are developed under the EU AI Act, the role of key organizations, and what this means for global AI governance and your compliance strategy.

You'll learn about the formal six-step process for creating harmonized standards, the current timeline and delays, how standards interact with other compliance options like codes of practice, and practical steps to prepare your organization. We'll also explore the global context, including the EU's partnership with India and other nations, and how these standards may influence AI governance worldwide.

Background: Why Standard Setting is Critical for EU AI Act Compliance

The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024, with obligations for high-risk AI systems applying from 2 August 2026. This landmark regulation categorizes AI systems by risk level: unacceptable (banned), high-risk, limited risk (transparency obligations), and minimal risk. For high-risk AI systems listed in Annex III, compliance requires meeting specific requirements for risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity.
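The four risk tiers can be modeled as a simple enumeration. This is an illustrative sketch only (the `RiskTier` type and the obligation summaries are ours, not the Act's wording), and real classification requires legal analysis of the Annex III use cases:

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk categories (illustrative labels)."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # Annex III use cases, e.g. employment, credit
    LIMITED = "limited"            # transparency obligations, e.g. chatbots
    MINIMAL = "minimal"            # no specific obligations, e.g. spam filters

# Rough obligation summaries per tier; verify details against the Act itself.
obligations = {
    RiskTier.UNACCEPTABLE: "prohibited from 2 February 2025",
    RiskTier.HIGH: "full requirements from 2 August 2026",
    RiskTier.LIMITED: "transparency duties",
    RiskTier.MINIMAL: "voluntary codes of conduct",
}
print(obligations[RiskTier.HIGH])
```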

But how do organizations practically demonstrate compliance with these broad requirements? This is where harmonized standards come in. Under the EU AI Act, the European Commission can request European Standardisation Organizations (ESOs) to develop technical standards that detail specific methods for achieving the safety outcomes mandated by the law. For businesses, using these harmonized standards provides a presumption of conformity with the corresponding legal requirements, significantly simplifying compliance efforts and reducing regulatory uncertainty.

It's important to note that compliance with harmonized standards remains voluntary. Organizations can choose alternative routes, such as developing their own interpretations of the requirements or following codes of practice. However, for most companies—especially those without extensive legal and technical resources—relying on harmonized standards will be the most efficient path to compliance. This makes understanding the standard-setting process essential for any organization developing or deploying AI in the EU market.

The Standard-Setting Mechanics: CEN-CENELEC JTC21's Six-Step Process

The European Commission designated CEN (European Committee for Standardization) and CENELEC (European Committee for Electrotechnical Standardization) as the primary ESOs for developing AI standards under the EU AI Act. These organizations operate through Joint Technical Committee 21 (JTC21), which is responsible for AI standardization.

The Formal Six-Step Process

The standard-setting process follows a structured approach with multiple checks and balances:

  1. Commission Request: The European Commission issues a formal standardization request (mandate) to the ESOs. For the EU AI Act, the Commission issued its standardization request covering the high-risk AI provisions in May 2023, though drafting has faced delays.
  2. Drafting by ESOs: CEN-CENELEC JTC21 forms working groups of technical experts from industry, academia, consumer organizations, and other stakeholders to develop draft standards.
  3. Enquiry: The draft standards are made available for public comment, allowing broader stakeholder input and transparency.
  4. Formal Vote: The National Standards Bodies that make up CEN and CENELEC's membership vote on the final draft standards by weighted vote.
  5. Publication by ESOs: Once approved, CEN and CENELEC publish the harmonized standards.
  6. Assessment by the Commission: The European Commission assesses whether the published standards satisfy the requirements of the standardization request and, if so, publishes references to them in the Official Journal of the EU, giving them legal effect as harmonized standards.

Key Actors and Current Status

The process involves multiple stakeholders: the European Commission (which initiates and oversees), CEN-CENELEC JTC21 (which develops), National Standards Bodies (which represent member state interests), and various stakeholders (who provide technical expertise and feedback). The initial batch of standards is expected to cover high-risk AI systems, with future standards potentially addressing other parts of the AI Act.

Organizations should note that the standard-setting process has faced delays since the Commission's May 2023 request. While the codes of practice for general-purpose AI (GPAI) models were due by 2 May 2025, and governance rules for GPAI apply from 2 August 2025, the harmonized standards for high-risk AI systems are still in development. Businesses should monitor progress closely, as these standards will be essential for compliance by the 2 August 2026 deadline for high-risk AI obligations.
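The staggered deadlines above can be tracked programmatically. A minimal sketch follows; the `deadlines` dictionary and `days_until` helper are illustrative, and the dates should be verified against the Official Journal:

```python
from datetime import date

# Key EU AI Act application dates (verify against the Official Journal).
deadlines = {
    "prohibited practices & AI literacy": date(2025, 2, 2),
    "GPAI governance rules": date(2025, 8, 2),
    "high-risk AI obligations (Annex III)": date(2026, 8, 2),
}

def days_until(deadline: date, today: date) -> int:
    """Days remaining until a deadline (negative if already passed)."""
    return (deadline - today).days

for name, d in deadlines.items():
    print(f"{name}: {days_until(d, date(2026, 3, 4)):+d} days")
```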

For real-time tracking of standard development and compliance deadlines, platforms like AIGovHub can provide automated monitoring and alerts.

Global Context: EU-India Partnership and International AI Governance

The EU's approach to AI standardization doesn't exist in a vacuum. As demonstrated by Executive Vice-President Henna Virkkunen's participation in the AI Impact Summit 2026 in New Delhi, the EU is actively shaping global AI governance through international partnerships. This engagement highlights several important trends for businesses operating across borders.

The EU-India AI Partnership

At the 2026 summit, the EU emphasized strengthening its partnership with India to drive economic growth and geopolitical influence in AI. Key initiatives included:

  • Participation in industry roundtables on EU-India cooperation in AI skills and talent mobility
  • Launch of the European Legal Gateway Office pilot to connect European companies with India's ICT talent base
  • Engagement in sessions on the AI Code of Practice and AI innovation

These efforts aim to accelerate AI deployment, scale innovation, and ensure AI remains human-centric, secure, and aligned with democratic values through trusted partnerships. For businesses, this means that EU AI standards may increasingly influence or align with approaches in other major markets, creating potential for harmonized global compliance strategies.

Broader International Implications

The EU's standard-setting approach represents one model for AI governance among several emerging globally. The US lacks comprehensive federal AI legislation (Executive Order 14110 having been revoked in January 2025), but states like Colorado have enacted their own laws, with the Colorado AI Act effective 1 February 2026. Other frameworks like the voluntary NIST AI Risk Management Framework (published January 2023) and the certifiable ISO/IEC 42001 standard (published December 2023) offer complementary approaches.

For multinational companies, this creates a complex compliance landscape. However, the EU's active international engagement—including meetings with partners from Canada, New Zealand, the United States, Australia, the United Kingdom, Brazil, and Morocco—suggests growing convergence around core principles like human-centric design, security, and democratic alignment. Businesses should consider how EU standards might serve as a foundation for broader global compliance programs.

Compliance Steps for Businesses: Preparing for Standard Implementation

With high-risk AI obligations applying from 2 August 2026, businesses need to start preparing now. Here's a practical roadmap:

Step 1: Conduct an AI Inventory and Risk Assessment

Identify all AI systems in your organization and classify them according to the EU AI Act's risk categories. Pay special attention to systems that might fall under Annex III's high-risk categories. Document their purposes, data sources, technical characteristics, and deployment contexts. This inventory forms the foundation for all subsequent compliance efforts.

Step 2: Monitor Standard Development Progress

Stay informed about the status of harmonized standards being developed by CEN-CENELEC JTC21. Subscribe to updates from National Standards Bodies, participate in public enquiries when possible, and consider using compliance platforms that track regulatory developments. Remember that while the initial focus is on high-risk AI standards, future standards may address other aspects of the AI Act.

Step 3: Evaluate Compliance Pathways

Determine whether your organization will rely primarily on harmonized standards (for presumption of conformity) or pursue alternative approaches like developing internal interpretations or following codes of practice. For most organizations, a hybrid approach may be optimal—using standards where available and supplementing with internal controls where needed.

Step 4: Implement Governance Structures

Establish clear accountability for AI compliance, including roles, responsibilities, and reporting lines. Consider implementing an AI Management System aligned with ISO/IEC 42001, which provides a structured framework for governance. Integrate AI risk management with existing processes for data protection (GDPR compliance, including DPIAs for high-risk processing) and cybersecurity.

Step 5: Prepare Technical Documentation

Start developing the technical documentation required for high-risk AI systems, including information on the system's purpose, development process, risk management measures, data governance, performance metrics, and human oversight mechanisms. Even before final standards are published, you can align with general principles from the AI Act and related frameworks like the NIST AI RMF (with its Govern, Map, Measure, Manage functions).
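A documentation skeleton can make gaps visible early. The section names below loosely follow the themes listed above; they are our illustrative grouping, and the Act's own Annex governs the required content:

```python
# Skeleton of a technical documentation file for one high-risk system.
# Section names are illustrative; the AI Act's full list governs.
tech_doc = {
    "general_description": {"intended_purpose": "", "versions": ""},
    "development_process": {"design_choices": "", "training_methodology": ""},
    "risk_management": {"identified_risks": [], "mitigations": []},
    "data_governance": {"datasets": [], "provenance": "", "bias_checks": ""},
    "performance": {"accuracy_metrics": {}, "robustness_tests": []},
    "human_oversight": {"measures": [], "operator_instructions": ""},
}

def missing_sections(doc: dict) -> list[str]:
    """Flag top-level sections whose fields are all still empty."""
    return [k for k, v in doc.items() if all(not x for x in v.values())]

print(missing_sections(doc=tech_doc))  # every section is empty in this skeleton
```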

Step 6: Plan for Conformity Assessment

Determine whether your high-risk AI systems will require third-party conformity assessment or if you can self-certify. Develop testing and validation procedures, and consider how you'll demonstrate compliance to regulators, customers, and other stakeholders.

Integrated governance platforms like OneTrust and Vanta can help streamline these compliance steps by providing frameworks for documentation, risk assessment, and control implementation.

Implementation Checklist: Preparing for EU AI Act Standards

Use this checklist to track your organization's progress:

  • ✓ Inventory Completion: Documented all AI systems with risk classifications
  • ✓ Monitoring Setup: Established process to track standard development by CEN-CENELEC JTC21
  • ✓ Pathway Decision: Determined primary compliance approach (standards vs. alternatives)
  • ✓ Governance Established: Assigned AI compliance responsibilities and reporting structure
  • ✓ Documentation Started: Begun technical documentation for high-risk AI systems
  • ✓ Assessment Planned: Identified conformity assessment requirements and procedures
  • ✓ Training Scheduled: Planned AI literacy training for relevant staff (required by 2 February 2025)
  • ✓ Global Alignment: Considered how EU compliance integrates with other frameworks (NIST, ISO, etc.)

Common Pitfalls to Avoid

Organizations often encounter these challenges when preparing for AI Act compliance:

  • Waiting Too Long: With standards still in development, some businesses delay preparation until they're finalized. This is risky given the 2 August 2026 deadline for high-risk AI obligations. Start building foundational governance now.
  • Overlooking Global Implications: Focusing solely on EU compliance without considering how standards might affect operations in other regions. The EU's international partnerships suggest its approach will influence global norms.
  • Underestimating Resource Needs: Compliance requires cross-functional collaboration between legal, technical, data, and business teams. Ensure adequate resources and executive sponsorship.
  • Ignoring Alternative Pathways: While harmonized standards provide presumption of conformity, they're not the only option. Understand when codes of practice or internal interpretations might be more appropriate for your specific systems.
  • Neglecting Existing Frameworks: Failing to leverage alignment with GDPR (in effect since 25 May 2018), ISO/IEC 42001, or NIST AI RMF. These can provide valuable foundations for AI Act compliance.

Frequently Asked Questions

What happens if harmonized standards aren't ready by the compliance deadline?

If harmonized standards aren't published by 2 August 2026 when high-risk AI obligations apply, organizations must still comply with the AI Act's requirements. They would need to rely on alternative approaches, such as developing their own interpretations or following codes of practice. The AI Act also empowers the Commission to adopt common specifications by implementing act where harmonized standards are absent or insufficient, and it may provide additional guidance in such scenarios, but businesses should prepare for this possibility by building robust internal compliance programs.

How do EU AI Act standards relate to other frameworks like ISO/IEC 42001?

EU harmonized standards are specifically designed to provide presumption of conformity with the AI Act's legal requirements. ISO/IEC 42001 is a broader, certifiable international standard for AI Management Systems that can help organizations implement the governance structures needed for compliance. Many organizations will use ISO/IEC 42001 as their management system framework while applying EU-specific standards for technical requirements. They're complementary rather than competing approaches.

Can non-EU companies participate in the standard-setting process?

Yes, the standard-setting process through CEN-CENELEC JTC21 allows participation from stakeholders worldwide, though EU member states have particular influence through their National Standards Bodies. International companies affected by the AI Act can engage through industry associations, technical committees, or public consultations during the enquiry phase. This is especially important given the EU's global partnerships and the extraterritorial reach of the AI Act.

What are the penalties for non-compliance with the EU AI Act?

The AI Act establishes significant penalties: up to EUR 35 million or 7% of global annual turnover, whichever is higher, for prohibited practices (applying from 2 February 2025), and up to EUR 15 million or 3% for other violations. These apply to failures to meet requirements for high-risk AI systems, transparency obligations, and other provisions. Using harmonized standards can help demonstrate compliance and reduce enforcement risk.
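Because the ceiling is the higher of the fixed amount and the turnover percentage, the percentage dominates for large companies. A quick illustration (the turnover figures are hypothetical):

```python
def max_fine(turnover_eur: float, fixed_cap: float, pct: float) -> float:
    """Upper bound of a fine: the higher of the fixed cap and pct of turnover."""
    return max(fixed_cap, turnover_eur * pct)

# Prohibited-practice tier: EUR 35M or 7% of worldwide annual turnover.
print(max_fine(2_000_000_000, 35_000_000, 0.07))  # large firm: 7% dominates
print(max_fine(100_000_000, 35_000_000, 0.07))    # smaller firm: fixed cap dominates
```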

Next Steps: Assessing Your AI Governance Strategy

The EU AI Act represents a fundamental shift in how AI is regulated, with technical standards playing a crucial role in translating legal requirements into practical implementation. As CEN-CENELEC JTC21 continues its work, businesses must proactively prepare for compliance deadlines while considering the global implications of these standards.

Start by assessing your current AI governance maturity against the checklist provided. Identify gaps in inventory, monitoring, documentation, and governance structures. Consider how integrated compliance platforms can streamline your preparation, especially for tracking standard developments and managing documentation across multiple frameworks.

For a detailed roadmap on implementing EU AI Act requirements, see our comprehensive implementation guide. To compare governance platforms that can support your compliance efforts, check our review of the best AI governance platforms.

Remember: prohibited AI practices and AI literacy obligations apply from 2 February 2025, with high-risk AI system obligations following on 2 August 2026. The time to prepare is now. Use AIGovHub's resources to assess your strategy, monitor regulatory developments, and build a compliant, ethical AI program that meets both EU requirements and global expectations.

This content is for informational purposes only and does not constitute legal advice. Organizations should verify current timelines and requirements with qualified legal counsel.