09 May 2022
As EU institutions work to amend the proposed Artificial Intelligence Act (AI Act), exploring and understanding the impact of AI systems on marginalised communities is vital. AI systems are increasingly developed, tested and deployed to judge and control migrants and people on the move in harmful ways. How can the AI Act prevent this?
A version of this blog was originally published by European Digital Rights.
Join us on Monday 16 May at an online event to discuss the vital changes that must be made to the proposed AI Act in order to uphold the fundamental rights of migrants and refugees.
More information and registration details here.
From AI lie detectors and AI risk-profiling systems used to assess the likelihood of ‘illegal’ movement, to the rapidly expanding tech-surveillance complex at Europe’s borders, AI systems are increasingly a feature of migration management in the EU.
On the ‘sharp edge’ of innovation
Whilst the uptake of AI is promoted by EU institutions as a beneficial policy goal, for marginalised communities, and in particular for migrants and people on the move, AI technologies fit into a wider system of over-surveillance, discrimination and violence. As highlighted by Petra Molnar in the report 'Technological Testing Grounds', AI systems increasingly control migration and affect millions of people on the move. ‘Innovation’ increasingly means a ‘human laboratory’ of tech experiments, with people in already dangerous, vulnerable situations as the subjects.
How do these systems affect people?
In migration management, AI is used to make predictions, assessments and evaluations about people in the context of their migration claims. Of particular concern is the use of AI to assess whether people on the move present a ‘risk’ of illegal activity or security threats. AI systems in this space are inherently discriminatory, pre-judging people on the basis of factors outside of their control. Along with AI lie detectors, polygraphs and emotion recognition, we see how AI is being used and developed within a broader framework of racialised suspicion against migrants.
Not only can AI systems inflict these severe harms on individual people on the move; they also form part of a broader surveillance ecosystem being developed at and within Europe’s borders. Increasingly, racialised people and migrants are over-surveilled, targeted, detained and criminalised through EU and national policies. Technological systems form part of those infrastructures of control, as documented in numerous Statewatch reports such as 'NeoConOpticon', 'Market Forces', 'Data Protection, Immigration Enforcement and Fundamental Rights' and 'Building the biometric state'.
More specifically, many AI systems are being tested and deployed at a structural level to shape the way governments and institutions respond to migration. This includes AI for generalised surveillance at the border, such as "heterogeneous robot systems" in coastal areas, and predictive analytics systems that forecast migration trends. There is significant concern that predictive analytics will be used to facilitate push-backs, pull-backs and other measures that prevent people from exercising their right to seek asylum. This concern is especially valid in a climate of ever-increasing criminalisation of migration, and of the human rights defenders who assist migrants. Whilst these systems do not always make decisions directly about individuals, they profoundly shape the experience of borders and the migration process, shifting it even further towards surveillance, control and violence throughout the journey.
Regulating migration technology: what has happened so far?
In April 2021, the European Commission launched its legislative proposal to regulate AI in the European Union. The proposal, whilst categorising some uses of AI in migration control as ‘high-risk’, fails to address how AI systems exacerbate violence and discrimination against people on the move in migration processes and at borders.
Crucially, the proposal does not prohibit some of the sharpest and most harmful uses of AI in migration control, despite the significant power imbalance that these systems exacerbate. The proposal also includes a carve-out for AI systems that form part of large-scale EU IT systems, such as the VIS and ETIAS. This is a harmful development, meaning that the EU itself will largely escape scrutiny for its use of AI in the context of its migration databases.
In many ways, the minimal technical checks required of (a limited set of) high-risk systems in migration control could be seen as enabling these opaque, discriminatory surveillance systems, rather than providing meaningful safeguards for the people subject to them.
The proposal makes no reference to predictive analytics systems in the migration context, nor to generalised surveillance technologies at borders, in particular those that do not make decisions about, or identify, natural persons. Systems that cause harm in the migration context in more systemic ways therefore appear to have been completely overlooked.
In its first steps to amend the proposal, the joint IMCO-LIBE committee did not put forward any amendments specific to the migration field. Major steps remain to be taken to improve the proposal from a fundamental rights perspective.
Amendments: How can the EU AI Act better protect people on the move?
Civil society organisations have been working to develop amendments to the AI Act to better protect against these harms in the migration context. As highlighted more broadly, EU institutions still have a long way to go to make the AI Act a vehicle for genuine protection of people’s fundamental rights, especially those of marginalised groups.
The AI Act must be updated in three main ways to address AI-related harms in the migration context:

1. Prohibit the most harmful uses of AI in migration control, including AI lie detectors, discriminatory risk-profiling systems and predictive analytics used to facilitate push-backs or otherwise prevent people from seeking asylum.
2. Expand the list of high-risk systems to capture the full range of AI used in migration, including predictive analytics and generalised border surveillance technologies.
3. Remove the carve-out for AI systems that form part of large-scale EU IT systems, such as the VIS and ETIAS, so that the EU’s own migration databases are subject to the same scrutiny.
The amendment recommendations on AI and migration were developed in coalition, reflecting the broad scope of harms and disciplines this issue covers. Special thanks to Petra Molnar, Access Now, the Platform for International Cooperation on Undocumented Migrants (PICUM), Statewatch, Migration and Technology Monitor, European Disability Forum, Privacy International, Jan Tobias Muehlberg, and the European Center for Not-for-Profit Law (ECNL).
Image: Jonathan McIntosh, CC BY 2.0