European standardization is hitting a wall. The EU’s landmark AI Act has been in force since August 2024, yet companies across the continent are still left in the dark about how to comply in practice. The reason? The very system responsible for translating regulation into operational standards, CEN and CENELEC, is structurally unfit for the pace and complexity of modern technologies like artificial intelligence.
This mismatch between regulatory ambition and technical implementation risks turning Europe’s AI strategy into an exercise in symbolic politics rather than meaningful governance.
What Are CEN and CENELEC?
CEN (European Committee for Standardization) and CENELEC (European Committee for Electrotechnical Standardization) are Europe’s official bodies for creating technical standards. Founded in 1961 and 1973 respectively, both organizations are headquartered in Brussels and represent 34 European countries.
Their model has proven effective for traditional industries:
- Electrical engineering: well-defined physical laws and measurable thresholds.
- Mechanical engineering: long-established safety norms and test procedures.
- Chemicals and food safety: quantifiable risks, decades of regulatory precedent.
But artificial intelligence defies this logic. AI systems are context-dependent, rapidly evolving, and highly heterogeneous. What makes sense for an image-recognition model does not apply to a generative chatbot—or an autonomous vehicle.
The AI Act Schedule: A Predictable Failure
The European Commission tasked CEN and CENELEC in May 2023 with developing the standards that would give teeth to the AI Act. A new Joint Technical Committee 21 (JTC 21) was created, with a delivery deadline of April 30, 2025.
That schedule has already collapsed.
- August 2024: JTC 21’s chairman acknowledged an eight-month delay.
- May 2025: It became evident that most standards would not be in place until after August 2026, two years after the AI Act entered into force.
The situation was so dire that the European Commission even considered suspending enforcement of the law.
The result: companies are legally bound by rules that cannot be operationalized in practice.
Why Standards Always Come After Regulation
Several structural issues explain why CEN/CENELEC cannot deliver on the AI Act’s timelines.
1. Complexity of AI
AI is not like electricity or food chemistry. There are no universal measurement thresholds. “Risk” in AI depends on the application, the data, and the context. Defining safety criteria for a medical diagnostic AI has little overlap with regulating a recommendation system in e-commerce.
2. Voluntary, Consensus-Based Process
CEN and CENELEC rely on the voluntary participation of more than 200,000 experts. These experts must achieve consensus across national delegations and industry stakeholders—a slow, painstaking process, particularly on controversial and fast-moving topics like AI ethics, bias, or transparency.
Confidentiality rules further limit transparency, leaving companies and the public guessing about progress until drafts emerge years later.
3. Process Overhead
The six-step standardization process was built for stable technologies. From initial inquiry to final publication, years can pass. Even worse, the “standstill rule” prevents member states from developing national standards while a European process is ongoing, blocking faster local solutions.
The Consequences for Companies
For businesses, this delay creates a regulatory paradox: they must comply with the AI Act without knowing what compliance looks like.
- Large corporations can afford to create internal frameworks, betting that these will later align with official standards.
- Small and medium-sized enterprises (SMEs) lack such resources, leaving them exposed to legal uncertainty and competitive disadvantage.
The vagueness around terms like “acceptable risk” or “trustworthy AI” means enforcement is essentially arbitrary until standards catch up.
A System at Its Limits
CEN and CENELEC remain indispensable for sectors built on physical and measurable parameters. But when it comes to fast-moving, software-driven fields like AI, they are operating outside their design limits.
The case of the AI Act highlights a deeper structural problem: Europe’s standardization machinery is too slow, too consensus-driven, and too bureaucratic for digital technologies that evolve in real time.
Unless alternative models are developed, such as agile, open, and iterative standard-setting processes that continuously involve industry, academia, and civil society, Europe risks suffocating its AI ambitions under its own procedural weight.
Regulation Without Standards = Symbolic Politics
The AI Act is a bold attempt to shape the global AI landscape with European values of safety, transparency, and accountability. But without the technical standards that operationalize these principles, the regulation risks remaining symbolic.
If Europe wants to lead in AI, it must rethink how it links regulation to implementation. Otherwise, the AI Act may end up as a cautionary tale of good intentions undone by outdated bureaucratic machinery.