Following the roadmap: unmasking the EU’s security AI plans

In 2020, a lengthy European Commission-funded study examined “how AI can be leveraged in the context of Border Control, Migration and Security.” The study included a “roadmap” setting out nine different projects to integrate AI into these sensitive policy areas. These projects include using AI to assess asylum applications, to track individuals’ compliance with immigration legislation (for example, by analysing welfare, tax and social security records), and to automate border surveillance.

Despite the potential impact of these plans on fundamental rights, many are being carried forward behind closed doors by officials from EU and national institutions and agencies: for example, through the Working Group on Artificial Intelligence hosted by the EU’s database agency, eu-Lisa; and via Europol’s Innovation Hub for Internal Security.

The eu-Lisa working group aims to “operationalise the research available on AI, putting it to the operational context, making it useful for border guards, law enforcement and migration officers.” While some of the projects set out in the roadmap would likely require legislative changes before they could be put into use, others would not – and in either case, it is vital that critical engagement with these plans begins sooner rather than later.

In this context, the goal of this project is to increase democratic scrutiny and oversight of the EU’s ongoing security AI plans. We will do this through an investigation of the ‘state of play’ of the security AI roadmap and the projects it encompasses, as well as other related ongoing work (for example, that carried out under the auspices of Europol’s Innovation Hub).
