Standardised bias? The role – and limits – of European standards bodies in the EU’s Artificial Intelligence Act
Acknowledgements
This op-ed arose from the workshop on Automation in Legal Decision-Making: Regulation, Technology, Fairness, organised on 14 November 2024 at the University of Copenhagen under the aegis of the LEGALESE project. The workshop aimed to inspire critical thinking about the growing role of automation in legal decision-making and its implications for fairness and explainability. Participants from multiple disciplines were invited to contribute and discuss blog posts reflecting on a range of pressing themes, including the role of human oversight, the challenges of explainability, and the importance of interdisciplinary dialogue for effective and responsible innovation. By engaging with both cautionary tales and constructive approaches, the event fostered a space for discussion on how legal, technical, and societal perspectives can come together to shape ‘fair’ ADM practices.
Funding: The authors received funding from the Innovation Fund Denmark, grant no. 0175-00011A.
One of the most pressing challenges addressed by the Artificial Intelligence Act (AIA) is the issue of bias in high-risk AI systems. Article 10 of the AIA requires that training, validation, and testing datasets be subject to data governance and management practices appropriate to the system’s intended purpose. These practices include examining possible sources of bias and taking steps to detect, prevent, and mitigate the biases identified. But whilst Article 10 lays out important principles, much of the technical detail is left to standard-setting. It will ultimately be up to European standardisation bodies, with input from international standards, to define what ‘appropriate’ data governance looks like in practice. In effect, the AIA outsources many of its difficult normative questions – including how to assess and reduce bias – to bodies with primarily technical expertise. This op-ed analyses the challenges standards organisations face in attempting to offer meaningful guidance as to how AI system providers can interpret and concretely meet the AIA’s requirements.
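To give a concrete flavour of what a dataset-level bias examination can look like, the minimal sketch below computes per-group selection rates and their ratio on a toy tabular dataset. This is an illustration only, under assumptions of our own choosing (a binary protected attribute and a binary label; all function names and data are invented), and the ‘four-fifths’ threshold in the comments is a rule of thumb from US employment practice, not a criterion prescribed by the AIA or any of the standards discussed here.

```python
from collections import defaultdict

def selection_rates(records, group_key, label_key):
    """Rate of positive labels per protected group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for r in records:
        counts[r[group_key]][0] += int(r[label_key] == 1)
        counts[r[group_key]][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

# Toy training data: does the positive label co-vary with group membership?
data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]
rates = selection_rates(data, "group", "label")
print(rates)                          # A: 2/3, B: 1/3
print(disparate_impact_ratio(rates))  # 0.5, below the common 0.8 rule of thumb
```

Even this trivial check embeds contestable choices: which attribute counts as protected, which outcome counts as favourable, and what degree of disparity matters. These are precisely the questions the AIA leaves to standard-setting.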
Standards as the bridge between legal mandates and technical guidance
Technical standards are to play a critical role in bridging the gap between law and practice under the AIA. There are clear benefits to relying on standards rather than overburdening lawmakers to come up with all the answers in a complex and fast-moving technological space. Keeping the ‘hard law’ high-level and technology-neutral allows lawmakers to defer to the expertise of more agile norm-makers, such as standards organisations, to flesh out legal requirements. Standards also offer a more direct pathway for collaboration with international stakeholders (Recital 121 and Article 40(3)). Allowing providers to presumptively demonstrate compliance with the AIA’s substantive requirements by adhering to harmonised standards (Article 40(1)) increases legal certainty and should improve the EU’s prospects of winning the ‘hearts and minds’ of those in the global AI value chain.
In 2023 the European Commission issued a formal standardisation request to CEN and CENELEC, the key European standards bodies. One of the requested deliverables concerns the ‘governance and quality of datasets used to build AI systems’, including ‘procedures for detecting and addressing biases and potential for proxy discrimination or any other relevant shortcomings in data’. The request further states that:
‘European standardisation deliverables shall reflect the generally acknowledged state of the art in order to minimise risks to the health and safety and fundamental rights of persons as guaranteed in the Charter of Fundamental Rights of the European Union as well as in applicable EU law aiming to protect fundamental rights that arise from the design and development of AI systems in view of their intended purpose.’ (C(2023) 3215 final, Annex II)
Since then, Joint Technical Committee 21 (JTC 21) of CEN/CENELEC has approved a technical report on Data governance and quality for AI within the European context (CEN/CLC/TR 18115:2024), and a technical specification, CEN/CLC ISO/IEC TS 12791:2024 Information technology – Artificial intelligence – Treatment of unwanted bias in classification and regression machine learning tasks, which, as the name indicates, is based on an existing ISO/IEC Technical Specification (ISO/IEC TS 12791:2024). Importantly, neither document is a harmonised standard: one is a technical report and the other a technical specification. Other deliverables, such as Quality and governance of datasets in AI (WI=JT021037) and Concepts, measures and requirements for managing bias in AI systems (WI=JT021036), are still in the drafting process.
The limits of standardisation: Feasibility, legitimacy, and fundamental rights
The AIA mandates consideration of existing international AI standards (Article 40(3)), allowing CEN/CENELEC to draw from ISO instruments. However, the current drafting process at CEN/CENELEC faces formidable challenges, both for the AIA’s harmonised standards generally and for bias specifically. Bias (and fairness) in AI is a highly contested concept, even (or perhaps especially) in technical fields. Reaching consensus on the nature of bias and its proper mitigation is a complex and politically sensitive task for an international standards body like ISO, and one which the currently available technical report and specification avoid engaging with head-on.
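One way to see why consensus is so elusive: widely used fairness metrics can pass and fail the very same classifier. The sketch below contrasts demographic parity (equal selection rates across groups) with equal opportunity (equal true-positive rates) on toy predictions; the data and names are invented for illustration, and these are just two of many candidate metrics in the fairness literature, not definitions adopted by CEN/CENELEC or ISO.

```python
def selection_rate(preds):
    """Share of individuals receiving the positive decision."""
    return sum(preds) / len(preds)

def true_positive_rate(labels, preds):
    """Share of actual positives that the model approves."""
    outcomes = [p for y, p in zip(labels, preds) if y == 1]
    return sum(outcomes) / len(outcomes)

# One classifier's output, evaluated separately per protected group.
labels = {"A": [1, 1, 0, 0], "B": [1, 1, 1, 0]}
preds  = {"A": [1, 1, 0, 0], "B": [1, 1, 0, 0]}

for g in ("A", "B"):
    print(g, selection_rate(preds[g]), true_positive_rate(labels[g], preds[g]))
# A: selection rate 0.5, true-positive rate 1.0
# B: selection rate 0.5, true-positive rate 0.67 (2 of 3 actual positives)
```

Here demographic parity holds (both groups are selected at the same rate) while equal opportunity fails (actual positives in group B are approved less often than those in group A). Well-known impossibility results show that several such metrics cannot, in general, be satisfied simultaneously, so choosing among them is a normative act, not a purely technical one.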
On the one hand, tackling this issue should be simpler for European standardisation organisations: the normative questions are guided by EU fundamental rights considerations, and there is no need to reach international consensus. On the other hand, the task remains formidable. AI is used in myriad contexts, and bias and fairness concerns are highly contextual, meaning that significant documentary output is likely required to meaningfully assist AI system providers in achieving compliance across different scenarios. Moreover, operationalising fundamental rights into technical processes that can be set out in standards documents will be challenging; commentators have questioned whether technical steps can account for the complexity and context-dependence of fundamental rights, especially as regards non-discrimination. CEN/CENELEC risks producing standards that are too vague to offer meaningful guidance, or too prescriptive to accommodate the diversity of AI systems and societal values.
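The flip side is that mitigation procedures themselves are often mechanically simple, and therefore easy to standardise; what resists standardisation is the judgement about when applying them is the appropriate remedy. As a minimal sketch, the following implements the well-known Kamiran–Calders reweighing technique (one approach among many, not one mandated by the AIA or the CEN/CENELEC work items), which reweights training examples so that group membership and outcome become statistically independent.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Weight each (group, label) cell so that group and outcome
    become statistically independent in the reweighted data."""
    n = len(groups)
    pg = Counter(groups)                # marginal counts per group
    py = Counter(labels)                # marginal counts per label
    pgy = Counter(zip(groups, labels))  # joint counts
    return {
        (g, y): (pg[g] / n) * (py[y] / n) / (pgy[(g, y)] / n)
        for (g, y) in pgy
    }

groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 1, 0, 0]
print(reweighing_weights(groups, labels))
# {('A', 1): 0.75, ('A', 0): 1.5, ('B', 1): 1.5, ('B', 0): 0.75}
```

The arithmetic is trivial; whether statistical independence between a protected attribute and an outcome is the right target in a given deployment context is exactly the fundamental-rights question that no amount of code can settle.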
These difficulties are compounded by questions of legitimacy, as CEN/CENELEC are tasked with resolving complex normative issues left open by the legislature. Some commentators have argued that standards organisations face a dilemma: either address these questions independently (an approach for which they arguably lack legitimacy), or achieve consensus amongst stakeholders with diverging interests. Given that the state of the art in bias detection and mitigation is far from settled, the latter may prove impossible.
Some of these problems have come to the attention of the Commission, which released an updated draft implementing decision in January 2025. This document acknowledged the difficulty of translating the fundamental rights dimension into technical standards, instructing CEN/CENELEC to ‘gather relevant expertise’ in fundamental rights and data protection and setting clearer parameters around acceptable definitions and references to other standards – particularly to avoid the uncritical adoption of ISO materials that may conflict with the AIA’s intent. Whilst the Commission’s recent intervention is welcome, the feasibility of answering complex problems such as bias, and the ability and legitimacy of standards organisations to do so, have been questioned since the AIA was at the drafting stage.
Standardising bias: the challenge ahead
The challenge of addressing bias and fairness in AI systems highlights the broader difficulty of embedding normative values in technical standards. These concepts are deeply contextual, politically sensitive, and resistant to reduction into universal procedures – yet standardisation bodies are tasked with doing just that. Whilst CEN/CENELEC will lead this effort under the AIA, they must do so within a global AI ecosystem, managing both legal obligations and international expectations. The wisdom of the lawmaker in deferring these questions to standards organisations is easy to criticise; more difficult is coming up with potential solutions within the EU’s chosen AI governance framework. Given that not all harmonised standards have been drafted, and that the AIA’s provisions on high-risk AI are still some way from being enforceable, there remains time for deliberation and course-correction. It may yet prove that the Commission’s initial standardisation request prompts further important work in defining and mitigating bias from technical, legal, and ethical perspectives.