26 July 2024
A letter from the #SafetyNotSurveillance coalition, of which Statewatch is a member, calls on the new Labour government to "protect people's rights and prevent uses of AI which exacerbate structural power imbalances." The government has announced that it will establish legislation on AI, and the letter calls for that law to prohibit predictive policing and biometric surveillance, and to ensure sufficient safeguards, transparency and accountability for all other uses of AI technologies.
Support our work: become a Friend of Statewatch from as little as £1/€1 per month.
The letter was coordinated by Open Rights Group.
Rt Hon Yvette Cooper MP,
Secretary of State for the Home Department
2 Marsham Street
London SW1P 4DF
26 July 2024
Dear Home Secretary,
We write to you as the #SafetyNotSurveillance coalition - a group of organisations working at the intersections of human rights, racial justice and technology.
In the King’s Speech, it was announced that the Government would “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”. A priority within this must be to protect people’s rights and to safeguard against the harms of AI systems in policing.
AI is rapidly expanding into all areas of public life, but carries particularly high risks to people’s rights, safety and liberty in policing contexts.
Many AI systems have been proven to magnify discrimination and inequality. In particular, so-called ‘predictive policing’ and biometric surveillance systems are disproportionately used to target marginalised groups including racialised, working class and migrant communities. These systems criminalise people and infringe human rights, including the fundamental right to be presumed innocent.
As signatories, we include individuals with lived experience of the harms of Artificial Intelligence (AI) systems in policing. As such, we, the undersigned, call on the government to protect people’s rights and prevent uses of AI which exacerbate structural power imbalances. AI systems in policing should be regulated by:
1. PROHIBITING ‘PREDICTIVE POLICING’ & BIOMETRIC SURVEILLANCE SYSTEMS
a) ‘Predictive policing’ systems which use AI, data and algorithms to identify, profile and target individuals, groups and locations, attempting to ‘predict’ certain criminal acts or the ‘risk’ of certain criminalised acts, should never be used, and should be prohibited.
These data-based 'predictions', profiles and 'risk' assessments influence, inform, or otherwise lead to policing and criminal justice outcomes, including surveillance, questioning, stop and search, fines, and even arrest. Automated ‘predictions’, profiles and ‘risk’ assessments can also lead to civil punishments, including the denial of welfare, housing, or other essential services, as well as harmful outcomes through immigration enforcement and increased surveillance from state agencies.
‘Predictive policing’ systems exacerbate structural imbalances of power. They are used to monitor and control people in public spaces, with the worst harms often falling on racialised and migrant communities, who are labelled as ‘threats’ to the state.
These systems have been proven to reproduce and reinforce discrimination and inequality, along the lines of, but not limited to: racial and ethnic origin, nationality, socio-economic status, disability, gender and migration status. Data reflecting existing inequalities and prejudices is used to recreate and reinforce these inequalities. These systems criminalise people and engage and infringe human rights, including the right to a fair trial and the presumption of innocence, the right to private and family life, and data protection rights.
These systems must therefore be prohibited.
2. ENSURING SAFEGUARDS, TRANSPARENCY & ACCOUNTABILITY FOR ALL OTHER USES
a) All data-based, automated and AI systems in policing which are not prohibited should be regulated to protect people’s rights and safeguard against harm.
All systems should be independently classified as 1. Prohibited uses, and 2. Non-prohibited uses, subject to strict transparency and accountability obligations.
b) A legislative framework creating transparency, accountability, accessibility and redress should underpin the use of all data-based, automated and AI systems in policing.
All systems which influence, inform or impact policing decisions should be subject to strict transparency and accountability obligations including:
Signed:
Open Rights Group
CAGE
Liberty
No Tech for Tyrants
Northern Police Monitoring Project
StopWatch
Bristol Copwatch
Racial Justice Network
Big Brother Watch
Runnymede Trust
Healing Justice LDN
Statewatch
UNJUST CIC
Prevent Watch
Community Policy Forum
The 4Front Project
Netpol (Network for Police Monitoring)
The UK's new Labour government must ensure "proper regulation of biometric surveillance in the UK," says a letter signed by nine human rights, racial justice and civil liberties groups, including Statewatch. "No laws in the UK mention facial recognition, and the use of this technology has never even been debated by MPs," the letter highlights. It calls on the new home secretary, Yvette Cooper, and the science, technology and innovation minister, Peter Kyle, to meet the signatory groups "to discuss the need to take action and learn from our European partners in regulating the use of biometric surveillance in the UK more broadly." A separate letter to Scotland's cabinet secretary for justice and home affairs raises similar points, and calls on the Scottish government "to stop the proposed use of live facial recognition surveillance by Police Scotland."
An officer of West Yorkshire Police, based in Leeds, has been convicted of breaches of the Computer Misuse Act 1990, after using police databases to search for information on people she knew without any legitimate reason. The case highlights the risks posed by forthcoming changes to UK data protection law.
Secret "trilogue" negotiations on the EU's proposed Artificial Intelligence Act are ongoing, and next week MEPs and EU member state representatives will start discussing bans and prohibitions. The week after, decisions are expected on whether to classify the use of AI for migration and security purposes as "high risk" or not. A statement directed at decision-makers and signed by 115 associations and individuals, including Statewatch, calls for strict limits and controls in the AI Act "to prevent harm, protect people from rights violations and provide legal boundaries for authorities to use AI within the confines of the rule of law."
Statewatch does not have a corporate view, nor does it seek to create one; the views expressed are those of the author. Statewatch is not responsible for the content of external websites, and inclusion of a link does not constitute an endorsement. Registered UK charity number: 1154784. Registered UK company number: 08480724. Registered company name: The Libertarian Research & Education Trust. Registered office: MayDay Rooms, 88 Fleet Street, London EC4Y 1DH. © Statewatch ISSN 1756-851X. Personal usage as private individuals ("fair dealing") is allowed. We also welcome links to material on our site. Usage by those working for organisations is allowed only if the organisation holds an appropriate licence from the relevant reprographic rights organisation (e.g. the Copyright Licensing Agency in the UK), with such usage being subject to the terms and conditions of that licence and to local copyright law.