UK: Decisions about people's lives "made solely by machines" must be banned

Changes to data protection law proposed by the UK government threaten to eliminate protections for individuals against automated decision-making. An open letter signed by almost 20 organisations, including Statewatch, calls on the government to ensure that this does not happen. "The government should extend AI accountability, rather than reduce it, at this critical moment," says the letter.

Image: Daniel, CC BY-NC 2.0


The letter was coordinated by Open Rights Group.

Rt Hon. Peter Kyle MP
Secretary of State for Science, Innovation and Technology
Department for Science, Innovation and Technology

6 December 2024

Dear Minister,

We are a group of trades unionists, academics, CEOs, NGOs and campaign organisations that work in fields impacted by automated decisions, such as Artificial Intelligence and profiling technologies; which is to say, nearly every area of our lives.

We work on data policy, and digital, policing, children, racial justice, employment, health, and disability rights. We have academic expertise in the areas of AI ethics, Computing, Law and Human Rights, run tech companies, and represent workers with expertise in technology.

We recognise that there are benefits to be gained from Artificial Intelligence. We also see that there are risks. Where we hope both we and the government can find consensus is that Artificial Intelligence technologies, in order to succeed, need to maintain the confidence of the public. The public must be able to trust the way that these technologies are employed.

Yet there are concerns. Data can be biased. Models can be wrong. The potential for discrimination and for deepening inequalities is known and significant. Important machine decisions can be wrong and unjust, and frequently Artificial Intelligence providers are unwilling or unable to address shortcomings.

The core of the necessary public trust relies on accountability. Decisions, whether made by humans or machines, are sometimes incorrect.

To this end, we are worried by the potential for measures in the Data (Use and Access) Bill to change the way that “Automated Decision Making” is governed under Article 22 UK GDPR. We respectfully ask that these clauses be re-examined to ensure that people are not simply subjected to life-changing decisions made solely by machines.

The debacle over A-level results when they were algorithmically assigned in 2020 is an example of the kind of automated decision which could be challenged by an individual under the current legal framework, but would not be open to challenge under the changes proposed. Hire and fire decisions by Uber have been successfully challenged under these rights in the Netherlands. [1]

We respectfully ask that these clauses be re-examined to ensure that people are not simply subjected to life-changing decisions made solely by machines, and forced to prove their innocence when machines get it wrong. The government should extend AI accountability, rather than reduce it, at this critical moment.

[1] Workers Info Exchange, Historic digital rights win for WIE and the ADCU over Uber and Ola at Amsterdam Court of Appeal

Signatories

Organisations

5Rights

Amnesty International UK

Big Brother Watch

Bristol Copwatch

Child Rights International Network (CRIN)

Connected by Data

Data, Tech & Black Communities CIC

Defend Digital Me

Foxglove

Keep our NHS Public

Medact

Open Rights Group

Public Law Project

Privacy International

Scottish Law and Innovation Network

Statewatch

Stopwatch

The Traveller Movement

Workers Info Exchange

Individuals

Wes Auden, Trades unionist, health sector

Professor Subhajit Basu FRSA, Chair in Law and Technology, Co-Director Post Graduate Research Studies, University of Leeds

Tim Bannister, IT Consultant

Paul Bernal, Professor of Information Technology Law, University of East Anglia

Dr Claire Bessant, Associate Professor, Northumbria Law School

Joanna J Bryson, Professor of Ethics and Technology, Hertie School

John Chadfield, National Officer for Technology, Communication Workers Union

Dr Benjamin Clubbs Coldron, Post-doctoral researcher, University of Strathclyde

Dr Aysem Diker Vanberg, Senior Lecturer in Law, Goldsmiths, University of London

Stef Elliott, Six Serving Men

Dr Maria Farrell, writer and speaker on tech policy

Wendy Grossman, technology journalist

Dr Edina Harbinja, Reader in Media and Privacy Law, Aston University

Dr Jeremy Harmer, Independent privacy researcher

Dr. Adam Harkens, Lecturer in Public Law, University of Strathclyde

Colin Hayhurst, CEO at Mojeek

Ben Hawes, Technology Policy Consultant

Douwe Korff, Emeritus Professor of International Law, London Metropolitan University

Jade Kouletakis, Lecturer in Intellectual Property Law, Abertay University

Dr Wenlong Li, Lecturer in Law, Aston Law School

Derek McAuley, Emeritus Professor of Digital Economy, University of Nottingham

Professor Annelize McKay, Professor of International Law & Bioethics, Dundee Business School, Faculty of Design, Informatics and Business, Abertay University

Miranda Mowbray, Honorary Lecturer in Computer Science, University of Bristol

Andrew Murray, Professor of Law and Technology and Director of the Law, Technology and Society Group, The LSE Law School, London School of Economics

Guido Noto La Diega, University of Strathclyde, Professor of Law, Technology and Innovation; Co-Convenor of the Socially Progressive AI

Ralph T O'Brien, REINBO Consulting Ltd, Institute of Operational Privacy by Design, UK Data Protection forum

Andy Phippen, Professor of Digital Rights, Bournemouth University

Dr Tom Redshaw, Lecturer in Digital Society, University of Salford

Dr Birgit Schippers, Lecturer in Law, University of Strathclyde

Adam Leon Smith, Chair, BCS Fellows Technical Advisory Group

Professor Tom Stoneham, Ethics Lead for the UKRI Centre for Doctoral Training in Safe AI Systems, University of York

Dr. Dan McQuillan, Lecturer in Creative & Social Computing, Goldsmiths, University of London

Dr Ben Williamson, Senior Lecturer, Centre for Research in Digital Education, University of Edinburgh

Richard Wingfield, Director, Technology and Human Rights, BSR


Further reading

09 August 2024

UK: Racist violence does not justify proposed expansion of police surveillance technology

Following the racist pogroms that broke out across England at the end of July and beginning of August, the prime minister, Keir Starmer, announced a range of new policing measures - including a proposal for "wider deployment of facial recognition technology." A letter signed by more than two dozen organisations, including Statewatch, says that an expansion of live facial recognition "would make our country an outlier in the democratic world" and calls for the plan to be dropped.

26 July 2024

UK: Call for "serious, meaningful protection" from police facial recognition technology

The UK's new Labour government must ensure "proper regulation of biometric surveillance in the UK," says a letter signed by nine human rights, racial justice and civil liberties groups, including Statewatch. "No laws in the UK mention facial recognition, and the use of this technology has never even been debated by MPs," the letter highlights. It calls on the new home secretary, Yvette Cooper, and the science, technology and innovation minister, Peter Kyle, to meet the signatory groups "to discuss the need to take action and learn from our European partners in regulating the use of biometric surveillance in the UK more broadly." A separate letter to Scotland's cabinet secretary for justice and home affairs raises similar points, and calls on the Scottish government "to stop the proposed use of live facial recognition surveillance by Police Scotland."

26 July 2024

UK artificial intelligence rules must protect rights, prevent worsening of structural power imbalances

A letter from the #SafetyNotSurveillance coalition, of which Statewatch is a member, calls on the new Labour government to "protect people's rights and prevent uses of AI which exacerbate structural power imbalances." The government has announced that it will establish legislation on AI, and the letter calls for that law to prohibit predictive policing and biometric surveillance, and to ensure sufficient safeguards, transparency and accountability for all other uses of AI technologies.

 
