If transparency is the solution, are we really addressing the problems of automated decision-making?
Acknowledgements
This op-ed arose from the workshop on Automation in Legal Decision-Making: Regulation, Technology, Fairness, organised on 14 November 2024 at the University of Copenhagen under the aegis of the LEGALESE project. The workshop aimed to inspire critical thinking about the growing role of automation in legal decision-making and its implications for fairness and explainability. Participants from multiple disciplines were invited to contribute and discuss blog posts reflecting on a range of pressing themes, including the role of human oversight, the challenges of explainability, and the importance of interdisciplinary dialogue for effective and responsible innovation. By engaging with both cautionary tales and constructive approaches, the event fostered a space for discussion on how legal, technical, and societal perspectives can come together to shape ‘fair’ ADM practices.
Funding: The author received funding from the Innovation Fund Denmark, grant no. 0175-00011A.
Introduction
The integration of artificial intelligence (AI) in general, and automated decision-making (ADM) in particular, into public governance has been keenly pursued by multiple governments and affiliated public bodies. ADM’s promise is highly appealing due to its touted ability to improve efficiency by reducing processing times for administrative tasks and replacing human decision-makers in routine processes. Theoretically, ADM should reduce the risk of human error or bias (Murray, 2024) and enable employees to spend their time on more complex tasks (Nordesjö, Ulmestig, & Scaramuzzino, 2023). In the UK, an automated tool for triaging suspected sham marriages considerably reduced staff numbers and initial processing times, delivering financial savings for the public sector, a welcome feature at a time of budget pressures. Finally, many perceive ADM systems as enabling greater consistency and objectivity, ensuring equality in the day-to-day workings of the public sector through the uniform application of standardised criteria across decisions (Ranerup & Henriksen, 2020).
Despite these benefits, the operation of ADM within the public sector has not been without controversy, proving costly and sometimes enabling further discrimination (Bouwmeester, 2025). Transparency has emerged as a dominant response to the challenges posed by ADM and AI in general (GPAI, 2024). The underlying assumption is that the “problem” stems from information deficits, prompting a “solution” of technical tools to create “explainable AI”. This framing fails to interrogate whether unfamiliarity with ADM really is the fundamental issue (Buçinca, Malaya, & Gajos, 2021; Chromik et al., 2021). Treating unfamiliarity as the sole, or at least the main, concern obscures deeper questions of power and democratic oversight in increasingly automated governance systems. A focus on the explainability of outputs risks neglecting crucial questions about system inputs, data sources, and the actors involved. So how does the principle of transparency respond to the challenges of ADM and “black box” algorithms more generally? And how could transparency be better utilised to safeguard against some of the risks arising from ADM use in the public sector?
Tracing transparency
Transparency in ADM systems often focuses on providing explanations of individual decisions, highlighting the factors that drove a particular outcome or comparing the individual to the relevant population, as the sketch below illustrates. Such decision-level explanations, however, do little to address the deeper issues surrounding ADM’s implementation.
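To make concrete what such decision-level explanations typically look like, consider the following minimal sketch of a linear scoring model that reports each factor’s contribution relative to the population average. It is a hypothetical illustration only: the factor names, weights, population statistics, and threshold are invented for this example and do not describe any real ADM system.

    # Minimal sketch of a per-decision explanation (hypothetical values only).
    # A linear scoring model reports each factor's contribution relative to
    # the population average: the "highlighting relevant factors" and
    # "comparing the individual to the population" styles described above.

    POPULATION_MEANS = {"income": 31000.0, "dependants": 1.2, "years_resident": 9.5}
    WEIGHTS = {"income": -0.00004, "dependants": 0.35, "years_resident": 0.05}
    THRESHOLD = 0.0  # score >= threshold -> benefit granted

    def explain_decision(applicant):
        contributions = {}
        for factor, weight in WEIGHTS.items():
            # Each factor's contribution is its weighted deviation from the mean.
            contributions[factor] = weight * (applicant[factor] - POPULATION_MEANS[factor])
        score = sum(contributions.values())
        decision = "granted" if score >= THRESHOLD else "denied"
        print(f"Decision: {decision} (score {score:+.3f})")
        # List the factors in order of how strongly they influenced the outcome.
        for factor, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
            print(f"  {factor}: {c:+.3f} (applicant {applicant[factor]}, population mean {POPULATION_MEANS[factor]})")

    explain_decision({"income": 45000.0, "dependants": 0, "years_resident": 2})

Note what the sketch exposes and what it does not: the applicant sees which factors weighed against them, but learns nothing about how the weights were set, what data they were derived from, or which private vendor built the system, precisely the structural questions discussed below.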
As governments face growing demands upon public services and increasing financial pressures, they prioritise cost-effectiveness, sparking a concerted push towards “innovation” within the public sector. This often entails partnerships with industry and the incorporation of commercially developed technologies (Su, 2022). However, commercial industry brings not just technical expertise but also infuses the public sector with profit-driven objectives, which rarely mesh well with core public values such as equity, accountability, and transparency (Waldman, 2019). This, coupled with a turn towards evidence-based policymaking, in which evidence only counts when it is countable, further distorts the landscape that transparency attempts to make explainable. A focus on documentation and process risks erroneously elevating symbolic compliance into evidence of legal compliance, further obscuring the underlying substantive values of fairness and dignity (Waldman, 2019).
In all, a focus on transparency to produce explainable AI fails to render visible the most relevant aspects of ADM, while ADM’s adoption prompts fundamental shifts in the relationship between public and private actors, to citizens’ detriment.
Transparency’s tricky relationship with accountability
For those who view transparency instrumentally, accountability is often the goal. The logic is that transparency helps uncover errors and, consequently, who is ultimately responsible for them. However, fundamental characteristics of ADM systems undermine the link between transparency and accountability.
Many ADM systems, in countries such as Italy, the UK, Germany, and Poland, are products of complex public-private partnerships, potentially creating a “many hands” problem for accountability (Cobbe, Veale, & Singh, 2023; Langer, 2024). Decision-making opportunities are scattered across multiple organisations with differing levels of involvement, undermining the possibility of identifying a meaningful accountability target. The private sector’s involvement introduces intellectual property, trade secrecy, and commercial confidentiality, which further erode transparency’s ability to facilitate accountability because they shield products from open scrutiny (Lechterman, 2021).
Transparency as a means of explaining individual decisions may also fall into the “ripple effect trap”, failing to acknowledge the impact of an ADM system upon the broader societal context in which it operates (Andrus et al., 2021). For example, explaining why someone was denied benefits may not reveal systemic biases in the data or the influence of private vendors. In the public sector, this narrow view risks overlooking how ADM reshapes accountability, fairness, and public trust at a structural level. Individual explanations will therefore not de-mystify the broader ramifications of ADM’s introduction for existing social structures, nor can the mathematical formalism that underpins ADM accurately capture contextual, often contested, social concepts like fairness (Selbst et al., 2018).
This background undermines claims that transparency can ensure meaningful accountability. As a result, current approaches to transparency may neither appropriately foster accountability nor address the full spectrum of potential negative outcomes.
Transparency: to what end?
Where transparency is primarily used to identify the individuals responsible for errors in individual decisions, it also risks overlooking the harms such systems may cause to collective and societal interests (Smuha, 2021).
The GDPR seeks to protect fundamental rights, but the limited availability of certain rights undermines this protection. For example, the right not to be subject to automated individual decision-making, which could help enhance transparency in such decision-making, applies only to decisions “based solely on automated processing”. Furthermore, the right is only available where such decisions produce “legal effects concerning him or her or similarly significantly affect[s] him or her”. These two criteria constrain the right’s practical utility in enhancing the transparency of such decisions (Davis & Schwemer, 2023; Tosoni, 2021). Recent case law has affirmed the right’s existence after much academic debate on the topic (Wachter, Mittelstadt, & Floridi, 2017). However, it is unclear whether this decision suffices to bridge the current gap between theoretical protections and real-world enforcement.
Other rights-based protections face further challenges. Human rights usually cast individuals as empowered rights holders, yet the reality often reveals overwhelmed and disempowered subjects, particularly in the context of complex AI-based decisions (Breuer, Heyman, & van Brakel, 2022). Many remain unaware of their rights, let alone of their infringement (Rughiniș et al., 2021). Traditional human rights mechanisms have long been criticised for their weak enforcement capabilities and inability to effect systemic change (Donoho, 2006).
Broadening horizons for future work
Effecting systemic change that upholds public values requires a focus upon the context in which ADM systems operate and their interactions with citizens. Alongside technical tools, greater efforts must be made to incorporate citizens into the integration of ADM systems into the public sector, coupled with meaningful public monitoring and oversight mechanisms (Denk, Hedström, & Karlsson, 2022; European Law Institute, 2021; Digital Future Society, 2022). The aim is not to dismiss the value of transparency or technical tools altogether, but neither explainability nor accountability, when narrowly interpreted through the lens of transparency, adequately addresses the structural, societal, and democratic challenges posed by ADM. For example, providing an explanation for why a benefits application was denied by an algorithm may satisfy formal transparency, but it does little to illuminate the opaque data sources, commercial interests, or systemic biases embedded in the system’s design, factors that may remain shielded from public scrutiny and meaningful oversight.