12 June 2023
Thirty civil society organisations, including Statewatch, have published a joint statement calling for the UK government to ensure that its approach to artificial intelligence upholds fundamental rights and democratic values.
Image: Tomizak, CC BY-ND 2.0
The statement was coordinated by the Public Law Project and has been covered by the BBC.
The Government’s approach to regulation of artificial intelligence (AI), as set out in its AI regulation white paper (pdf), misses a vital opportunity to ensure that fundamental rights and democratic values are protected.
In particular, it fails to ensure that adequate safeguards and standards are in place for use of AI by public authorities.
The use of AI in public decision-making offers the promise of greater efficiency and accuracy.
However, there is also a risk of direct or indirect discrimination, and the exacerbation of existing inequalities. Regulation is essential to ensure that AI works for the public good.
As civil society groups who represent individuals and communities impacted by government use of automation and AI across the UK, we urge the UK Government to develop and implement AI regulation at minimum in line with the following principles:
These principles will require obligations in statute, which will need to build upon and work with existing data protection safeguards and our human rights framework. Instead of promoting existing standards, the Data Protection and Digital Information (No 2) Bill is weakening them, and threats to leave the European Convention on Human Rights – and legislation which disapplies parts of the Human Rights Act – are putting our human rights framework at risk. Effective AI regulation must strengthen, rather than undermine, existing protections.
Effective regulation of how public authorities use AI must have mandatory transparency as its starting point. Individuals and communities whose lives are impacted by AI should know when they are subject to automated decision-making (ADM), and how those decisions are made. Without transparency, individuals and parliamentarians cannot hold decision-makers to account when AI systems produce harmful or discriminatory outcomes. Indeed, without such transparency, the Government's own stated objective of increasing public trust in a regulatory framework for AI is unachievable.1
When it comes to transparency, compliance cannot be optional. Transparency requirements must be set out in primary legislation, rather than in guidance. Wherever an ADM tool is being used to make, or support, decisions which have a legal, or similarly significant, effect on someone, requirements should include:
Decision-making which has significant implications for people and which may affect their rights should be undertaken with a ‘human meaningfully in the loop’. More research is required to understand the impact of the use of AI on decision-making by officials, and to determine what meaningful human intervention looks like. Mere rubber-stamping of algorithmic outputs will not lead to proper protection and accountability.
There must be clear division of responsibility between those who develop, own and deploy AI tools, in order to facilitate effective protection and accountability. Those responsible for developing, testing and using AI must be subject to clear statutory obligations – for example, to ensure that at each stage, the necessary checks for and safeguards against discriminatory outcomes have been put in place.
Those who will be affected by the use of a new tool - as well as academics and civil society more broadly - should have the chance to participate in that tool’s design and deployment. Alongside robust testing, this would help identify risks and support effective design. It would also build public trust around use of ADM by government and facilitate consensus-building about the role of AI in our society.
Individuals and communities who are adversely affected by ADM must have access to quick, accessible, and effective avenues of redress. The existing patchwork of regulatory bodies lacks statutory powers and financial resources. Given the specificity and complexity of this domain, an independent expert regulator is required. This regulator needs to be adequately resourced and given the right tools to enforce the regulatory regime, including powers to proactively audit public ADM tools and their operation.
As the white paper itself recognises, AI presents a serious risk to fundamental rights in a number of contexts, including the rights to privacy and non-discrimination.2 Furthermore, "the patchwork of legal frameworks that currently regulate some uses of AI may not sufficiently address the risks that AI can pose".3 It is therefore critical that any proposed AI-specific regulation addresses those risks, including by prohibiting certain uses of AI which pose an unjustified risk to fundamental rights. Such prohibitions must be set out in primary legislation, so that they are subject to a democratic process. Other jurisdictions (including the EU and the US) are beginning to do so. By failing to prohibit certain uses of AI in law, the UK is falling behind in ensuring adequate protections against the risks of AI, and clarity for those who use and are affected by it.
Signatories include:
Public Law Project
Liberty
Big Brother Watch
Just Algorithms Action Group (JAAG)
Work Rights Centre
the3million
Migrants' Rights Network
Helen Mountfield KC
Monika Sobiecki, Partner, Bindmans LLP
Dr Derya Ozkul, Senior Research Fellow, Refugee Studies Centre, University of Oxford
Child Poverty Action Group
Open Rights Group
Louise Hooper, Garden Court Chambers
Dr Oliver Butler, Assistant Professor in Law, University of Nottingham
Birgit Schippers, University of Strathclyde
Connected by Data
Professor Joe Tomlinson, University of York
Welsh Refugee Council
Association of Visitors to Immigration Detainees (AVID)
Statewatch
Dave Weaver, Chair of Operation Black Vote
Lee Jasper, Blaksox
Sampson Low, Head of Policy, UNISON
Shoaib M Khan
Tom Brake, Director, Unlock Democracy
Asylum Link Merseyside
Fair Trials
Clare Moody, Co-CEO, Equally Ours
Isobel Ingham-Barrow, CEO, Community Policy Forum
Jim Fitzgerald, Director, Equal Rights Trust