The EU AI Act and the violent logics of border AI
Introduction
On November 19, 2025, the European Commission presented its Digital Omnibus package, aimed at simplifying the European Union’s (EU) digital regulatory corpus, including the first-of-its-kind Artificial Intelligence Act (“AI Act”) adopted in March 2024. The Digital Omnibus reviews EU digital legislation which – in addition to the AI Act – includes the Data Act, the General Data Protection Regulation (GDPR), the ePrivacy Directive, and cybersecurity legislation, with the stated goal of simplifying regulatory requirements and lowering the compliance burden on businesses. This simplification, pushed by the European tech sector in the name of innovation, has been characterised by civil society as “a euphemism for deregulation”, marking the “biggest rollback of digital rights in EU history”. This concern echoes earlier warnings about the AI Act’s provisions on the use of AI systems in migration and border-control contexts.
Initially, the AI Act set out to establish a set of risk-based rules to support the safe development and deployment of AI systems while protecting fundamental rights. However, even though the EU identified AI systems used in border control (henceforth, border AI) as a priority, civil society advocates were quick to note that a “parallel legal framework” had been created for border-control actors, sounding the alarm about potential harms to migrants.
Existing analyses at the intersection of technology governance and migration policy have examined the failures of the AI Act, scrutinising the risk levels assigned to discrete border AI systems and assessing the appropriateness of the corresponding obligations imposed on system developers and deployers. They point out, for instance, that even where the AI Act prohibits certain systems – as is the case with emotion recognition – those bans do not apply to migration authorities.
But beyond these particular rules and exceptions, which are tied to specific tools, it is worth looking at the three foundational logics that underpin AI systems as a whole.
The extractive logic of border AI
First, AI systems necessitate data. As such, border AI systems like the ones deployed at the EU’s external borders rely on massive data collection apparatuses necessary to render the bodies of migrants legible to the EU and its member states. Indeed, migrants are dispossessed of large quantities of data on exploitative terms, as opting out or seeking appeal is nearly impossible. This continuous project of legibility expands as surveillance techniques become more sophisticated and bordering sites more geographically diffuse. With more data-capture touchpoints, more migrants become legible to border-control authorities, allowing for their categorisation into those who are eligible to access asylum and formal migration pathways, and those who are not.
When it comes to the extractive logic of AI, the AI Act regulates the use of AI in publicly accessible spaces to limit surveillant data collection from civilians. Yet the AI Act explicitly states that publicly accessible spaces do not include border zones. This exception permits the unregulated use of AI to capture data from migrants. For example, AI systems that identify individuals from a distance using their biometrics – such as facial features, voice, or gait – can capture and store the data of migrants “without evidence, suspicion, or legal justification”.
The predictive logic of border AI
Second, a logic of prediction is quintessential to AI systems, whose core function is to forecast outcomes based on data and pattern recognition. The algorithmic processing of large amounts of data serves to anticipate and speculate on future activity, as epitomised in border AI systems that aim to forecast migration flows – or even foretell “people’s intentions to migrate” – as well as those that purport to predict the security risk posed by individuals. This logic of prediction broadens the temporality of border violence: it could allow patrolling and deterrence resources to be allocated not only along routes migrants have already taken, but also before they begin their migration journeys or in anticipation of their future routes, thus justifying punitive practices of interception and interdiction on the basis that migrants will supposedly pose future threats.
When it comes to the predictive logic of AI, the AI Act de-risks predictive analytics systems used for migration forecasting and the individual risk assessment and profiling of migrants. It is these systems that facilitate the implementation of (often violent) pushback practices aimed at preventing and halting the movement of migrants before they arrive at the EU’s borders.
The experimental logic of border AI
Third, the rapid research and development cycles of AI technologies require constant testing and iteration. In line with the EU’s decades-long history of treating former European colonies as testing grounds, activists warn that the EU’s southern border has been turned into a laboratory for border AI, at the expense of migrant and refugee communities. A lack of regulation of border technologies permits the unfettered testing of “experimental and high-risk technologies” on migrants, in a manner that would not be permitted on citizens. For instance, new border AI projects are implemented by EU authorities “only to discover that the systems do not perform as expected”, by which point the harms to migrants are already virtually irreversible.
When it comes to the experimental logic of AI, the AI Act regulates the real-world testing of high-risk systems by requiring that the providers or prospective providers of those systems adhere to several conditions, in order to ensure that testing does not have negative effects on the individuals involved. One such condition stipulates that all instances of real-world testing must be registered in an EU-wide database for transparency purposes. However, border-control authorities are exempted from public registration: they may register in a non-public section of the EU database, allowing them to operate in secrecy and with impunity, and making it difficult for migrants and their advocates to seek redress or appeal.
Conclusion
These three foundational logics of AI facilitate border violence through unconsented data dispossession, punitive pushbacks, and experimental harm. In other words, border AI mediates the unfettered surveillance of migrants, the anticipatory allocation of resources aimed at interrupting their mobility, and the restriction of their avenues for advocacy and for protection from unregulated experimentation.
These harms should, in theory, warrant a more careful and stringent approach to regulating border AI and safeguarding the fundamental rights of migrants. Instead, the AI Act neglects to address the foundational violence of border AI systems, underpinned by their extractive, predictive, and experimental logics. As advocates foresee that the recent Omnibus package will strip down the “already inadequate protections” of the AI Act, it is worth remembering those who were not safeguarded by the EU AI Act in the first place: migrants. To protect the human and digital rights of migrants, regulators should, at a minimum:
- Put in place adequate protections against the overreaching surveillant capture of migrants’ data, including through biometrics. Ensure migrants’ rights to opt out of data collection and storage where appropriate;
- Establish a more prudent, higher-risk classification of predictive analytics and other systems that facilitate dangerous pushbacks of migrants;
- Avoid exempting migration authorities from the transparency requirements and other protections meant to safeguard migrants, including against the unchecked testing of AI systems. Guarantee migrants’ rights to redress and remedy.






