Fairness as crowd-pleaser
Acknowledgements
This op-ed arose from the workshop on Automation in Legal Decision-Making: Regulation, Technology, Fairness organised on 14 November 2024 at the University of Copenhagen under the aegis of the LEGALESE project. While all workshop participants provided useful input to the op-ed, the feedback from Ida Koivisto was especially helpful.
The workshop aimed to inspire critical thinking about the growing role of automation in legal decision-making and its implications for fairness and explainability. Participants from multiple disciplines were invited to contribute and discuss blog posts reflecting on a range of pressing themes, including the role of human oversight, the challenges of explainability, and the importance of interdisciplinary dialogue for effective and responsible innovation. By engaging with both cautionary tales and constructive approaches, the event fostered a space for discussion on how legal, technical, and societal perspectives can come together to shape ‘fair’ ADM practices.
Funding: The author received funding from the Innovation Fund Denmark, grant no 0175-00011A.
A sea of flags
Across the fissured terrain of regulatory discourse on automated decision-making, flags emblazoned with ‘fairness’ loom large. The same pertains to tech policy more generally. This is not novel. Over 50 years ago, the US federal Department of Health, Education, and Welfare (DHEW) hoisted the fairness flag above the then-nascent field of data privacy law, by publishing data privacy principles bearing the title ‘Fair Information Practices’ (DHEW, 1973).
Fairness flags have since been raised not just in the name of data privacy, but in numerous other causes, including antitrust (‘fair competition’), due process (‘fair trial’), exemptions from copyright (‘fair use’), and workers’ welfare (‘fair work’). Gazing across this sea of flags, it is tempting to see fairness as constituting an overarching meta-principle for governing human behaviour, particularly regarding tech usage. In this vein, Scheurer pitches fairness as providing a normative bridge between ‘law and society’ with the aim of safeguarding the ‘honesty’ of commercial practices in the digital world (Scheurer, 2023). This is a persuasive vision, although one may debate whether ‘honesty’ is the core value at stake. Fairness has numerous facets going beyond honesty. These include parity, impartiality, and respect – as elaborated further below.
A sea of conceptions and a sneaking suspicion
The multidimensionality of fairness means that the various flagbearers’ respective conceptions of its meaning are unlikely to be fully aligned (Calvi and Kotzinos, 2023). However, gauging the degree of (non-)alignment is far from easy. Across large swathes of fairness-focused discourse, the very core of the concept remains elusive.
This can give rise to a suspicion that fairness is semantically vapid, at least from a legal dogmatic perspective. The suspicion is buttressed when ‘fairness’ is listed with terms for which it could arguably be a proxy or synonym. A case in point is the so-called FRAND principle requiring owners of patents that are essential for technical standards to license their patents according to ‘fair, reasonable, and non-discriminatory’ terms. Another case in point is Article 5(1)(a) of the EU General Data Protection Regulation, which stipulates that personal data shall be processed ‘lawfully, fairly and in a transparent manner’. Some scholars have claimed that fairness in the latter case is essentially a proxy for transparency (Wachter and Mittelstadt, 2019, 581–582).
Respect and indeterminacy
In my view, while fairness is hard to pin down, it is not simply a proxy for transparency; neither is it a semantic nullity. At its core, fairness requires that all relevant interests be taken into due account such that no one interest rides roughshod over others. As such, it expresses a principle of respect. In this way, fairness also speaks directly to the distribution and exercise of power. This gives it serious normative work to do that should extend beyond mere ‘window dressing’ in law and policy instruments.
Nonetheless, the indeterminacy of its operationalisation makes it incurably open-textured. Fairness belongs to a family of what the legal philosopher, Julius Stone, termed ‘indeterminate references’ (Stone, 1968). These include concepts such as ‘reasonableness’, ‘appropriateness’ and ‘justice’ – some of the siblings of fairness. They constitute legal standards that, when operationalised, require more than mechanical application of logic rooted in fact: the ‘judgment cannot turn on logical formulations and deductions, but must include a decision as to what justice requires in the context of the instant case … [Such standards] are predicated on fact-value complexes, not on mere facts’ (Stone, 1968, 263–264). This gives the standards an ineluctable degree of indeterminacy in their application.
The moral high ground and logomachy
Another common property of such standards is that they are intrinsically positive from an ethical point of view. Thus, fairness occupies the moral high ground. This brings with it a presumption of normative legitimacy – a ‘halo effect’ in Koivisto’s terms (Koivisto, 2022, 5).
At the same time, the sea of flags emblazoned with ‘fairness’ evidences an immense degree of empirical legitimacy. There can be little doubt that fairness is a crowd-pleaser. This makes it particularly attractive as a rallying cry. And in most wars of words, fairness packs a powerful punch.
Magic concepts
As such, fairness can be added to the list of ‘magic concepts’ drawn up by Pollitt and Hupe in their analysis of popular buzzwords employed in management discourse (Pollitt and Hupe, 2011). Pollitt and Hupe characterise magic concepts as sharing four key features: (i) broad applicability and high valency; (ii) ‘overwhelmingly positive connotation’; (iii) ability to paper over ‘conflicting interests and logics’; and (iv) common invocation by stakeholders as ‘solutions’.
Fairness shares all these features, although they do not fully explain the ‘magic’ it exercises, particularly its ability to defuse tension. Three overlapping characteristics of its semantics are decisive for that ability.
Unifying ideal
The first characteristic is that fairness is sufficiently nebulous to mask, at least partially, political and ideological divisions. This is partly a result of its diffuseness. Thus, it may function as a unifying ideal that is especially useful to fall back upon in political tussles. Legislative references to fairness may also lessen tension in the ‘regulatory conversation’ (Black, 2002) between regulators and regulatees. This helps explain why its substance and contours are often not elaborated in detail by legislators: the latter have little interest in reducing its fluffiness in a way that would undermine its bridging function as a facilitator of (rough) consensus.
Mutuality
The second is that fairness connotes, as indicated above, respect for, and due deference to, various interests and stakeholders. A closely related connotation is mutuality. This comes through strongly in the conception of ‘fair(ness)’ in the ‘fair information practices’ set forth in the aforementioned DHEW report, which states that ‘personal privacy, as it relates to personal data-record keeping must be understood in terms of a concept of mutuality’ (DHEW, 1973, 40) and goes on to emphasise a need for a balanced relationship regarding what an individual loses or gains in exchange for the (semi-)coercive record-keeping practices of organisations. Obviously, this approach lessens the abrasiveness of interest collisions. Arguably, it has also helped enable a gradual long-term erosion of the bedrock of privacy – viz. the ‘frog-in-boiling-water’ problem – although that matter is beyond the scope of this op-ed. More interesting for present purposes is that the conception of fairness as mutuality in the sense championed by DHEW is now largely forgotten, at least in Europe, where fairness as a data privacy principle is pitched as predominantly if not exclusively safeguarding the privacy-related interests of the individual as data subject (European Data Protection Board, 2020, 17–18).
Plasticity
The latter observation nicely illustrates the third tension-reducing characteristic of fairness: its plasticity. Although closely related to the two characteristics discussed above, plasticity extends to a chameleon-like ability to adjust, relatively smoothly, the normative direction or ends that fairness serves. This ability is partly a function of its open-textured character as an indeterminate reference.
Postponement + plasticity = duplicity?
Magic usually comes at a price. As Pollitt and Hupe astutely observe of magic concepts, these ‘excite discussion, but when the show is over many hard choices remain’ (Pollitt and Hupe, 2011, 654). This observation rings true also for fairness and portends an important lesson as to its possible limits as a crowd-pleaser.
While its breadth of scope and plasticity make it a powerfully flexible norm, its open texture is also a weakness inasmuch as it offers, on its own, little upfront prescriptive guidance to regulatees and other relevant stakeholders. And while fairness may be helpful for kicking the proverbial can down the road – a fortiori when the can contains a congeries of thorny issues over which agreement is challenging to reach – its overuse as a tool of postponement risks giving rise to cynicism over its legitimising function. This may in turn cast its plasticity in a negative light and nurture an attitude that, like many other crowd-pleasers, fairness is ‘all show and no substance’.
These considerations might go some way towards explaining the eschewal of reference to fairness in the operative provisions of the EU’s Artificial Intelligence Act. The omission is surprising given that one of the Act’s policy foundations – the Ethics Guidelines for Trustworthy Artificial Intelligence published by the High-Level Expert Group on AI (2019) – posits a ‘principle of fairness’ as one of four basic norms for AI governance. It is also surprising given the multitude of other fairness flags waving in the AI policy space. In effect, the EU legislator has chosen to employ alternative formulations in the Act’s operative provisions, such as respect for ‘fundamental rights and freedoms’, to absorb the normative role that fairness could otherwise play in regulating AI. The exact reason for this strategy remains unclear, although this op-ed provides some grounds for conjecture.