01 March 2022
45 organisations, including Statewatch, are calling on EU decision-makers to prohibit the use of predictive and profiling "artificial intelligence" (AI) systems in the realm of law enforcement and criminal justice, a move that will "ensure full fundamental rights protection for people affected by AI systems, and in particular... prevent the use of AI to exacerbate structural power imbalances."
The statement was coordinated by Fair Trials and EDRi. The full text is below.
The European Union institutions are taking a significant step to regulate artificial intelligence (AI) systems, including in the area of law enforcement and criminal justice, through the proposed Artificial Intelligence Act (AIA).
This is a unique opportunity to ensure full fundamental rights protection for people affected by AI systems, and in particular, to prevent the use of AI to exacerbate structural power imbalances. AI systems in law enforcement, particularly the use of predictive and profiling AI systems, disproportionately target the most marginalised in society, infringe on liberty and fair trial rights, and reinforce structural discrimination.
We, the undersigned organisations, call on the Council of the European Union, the European Parliament, and all EU member state governments to prohibit predictive and profiling AI systems in law enforcement and criminal justice in the Artificial Intelligence Act (AIA).
This statement details the harmful impact of predictive, profiling and ‘risk’ assessment systems in law enforcement and criminal justice and makes the case for amendments to the EU’s AIA.
What are predictive and profiling AI systems in law enforcement and criminal justice?
Artificial intelligence (AI) systems are increasingly used by European law enforcement and criminal justice authorities to profile people and areas, predict supposed future criminal behaviour or occurrence of crime, and assess the alleged ‘risk’ of offending or criminality in the future.
These predictions, profiles, and risk assessments, conducted against individuals, groups and areas or locations, can influence, inform, or result in policing and criminal justice outcomes, including surveillance, stop and search, fines, questioning, and other forms of police control. They can lead to arrest, detention and prosecution, and are used in sentencing and probation. They can also lead to civil punishments, such as the denial of welfare or other essential services, and increased surveillance from state agencies. Policing and criminal justice authorities across Europe are using these AI systems to influence, inform, or assist in criminal justice decisions and outcomes.
The fundamental rights harms of predictive, profiling and risk assessment AI systems in criminal justice
Discrimination, surveillance and over-policing
These AI systems reproduce and reinforce discrimination on grounds including but not limited to: racial and ethnic origin, socio-economic status, disability, migration status and nationality. They also engage and infringe fundamental rights, including the right to a fair trial and the presumption of innocence, the right to private and family life, and data protection rights.
The law enforcement and criminal justice data used to create, train and operate AI systems often reflects historical, systemic, institutional and societal discrimination, which results in racialised people, communities and geographic areas being over-policed and disproportionately surveilled, questioned, detained and imprisoned across Europe.
These discriminatory practices are so fundamental and ingrained that all such systems will reinforce such outcomes. This is an unacceptable risk.
The right to liberty and the right to a fair trial and the presumption of innocence
Predictive, profiling and risk assessment AI systems target individuals, groups and locations, and profile them as criminal, resulting in serious criminal justice and civil outcomes and punishments, including deprivations of liberty, before they have carried out the alleged act for which they are being profiled.
By their nature, these systems therefore undermine the fundamental right to be presumed innocent, shifting criminal justice attention away from criminal behaviour towards vague and discriminatory notions of risk and suspicion. The outputs of these systems are therefore not reliable evidence of actual or prospective criminal activity and should never be used as justification for any law enforcement action, such as an arrest, let alone be submitted in criminal proceedings.
Further, such systems facilitate the transfer of substantive decisions affecting peoples’ lives (criminal justice, child protection) from the judicial to the administrative realm, with serious consequences for fair trial, liberty and other procedural rights.
Transparency, accountability and the right to an effective remedy
AI systems that are used to influence, inform and assist law enforcement and criminal justice decisions through predictions, profiles and risk assessments often have technological (black boxes, neural networks) or commercial barriers (intellectual property, proprietary technology) that prevent effective and meaningful scrutiny, transparency, and accountability. It is crucial that individuals affected by these systems’ decisions are aware of their use.
To ensure that the prohibition is meaningfully enforced, as well as in relation to other uses of AI systems which do not fall within the scope of this prohibition, affected individuals must also have clear and effective routes to challenge the use of these systems via criminal procedure, to enable those whose liberty or right to a fair trial is at stake to seek immediate and effective redress.
Prohibit Predictive and Profiling AI Systems
The undersigned organisations urge the Council of the European Union, the European Parliament, and all EU member state governments to prohibit predictive and profiling AI systems in law enforcement and criminal justice in the Artificial Intelligence Act (AIA).
Such systems amount to an unacceptable risk and therefore must be included as a ‘prohibited AI practice’ in Article 5 of the AIA. Ongoing negotiations on the AIA must be informed by a full consideration of the fundamental rights and societal harms associated with predictive systems in policing and criminal justice, and the fundamental rights of individuals, groups, as well as the consequences for democratic society, must be prioritised.
For further information, including examples and case studies and further analysis, please see: