16 January 2025
The EU is preparing guidance on the definitions and prohibitions contained in the AI Act, which was agreed last year. A statement signed by more than two dozen organisations and individuals, including Statewatch, says the guidance must be centred on upholding fundamental rights.
Image: Joseph Gage, CC BY-SA 2.0
Risk factors
The AI Act was approved last June after lengthy and complex negotiations, resulting in a lengthy and complex law.
The Act requires that the EU produce an array of guidelines to help developers and users of AI systems interpret it.
One set of guidelines is supposed to help companies and institutions understand what exactly an AI system is, and which practices are prohibited by the Act.
The Act already contains a lengthy definition of the term “AI system,” though it is open to interpretation.
The Act also sets out a number of “prohibited AI practices.” These include remote biometric identification (such as public facial recognition), “social scoring,” predictive policing and emotion recognition.
However, as the statement underscores, the Act contains “various grave loopholes when it comes to the protection of fundamental rights, particularly in the areas of policing and migration.”
Those loopholes mean there are various ways in which supposedly prohibited practices could still be employed by policing or border agencies.
AI guidelines
The guidelines will be published by the AI Office, a new EU body responsible for overseeing the AI Act, and are supposed to make clear how the law should be interpreted.
The statement, coordinated by the AI Act civil society coalition and the #ProtectNotSurveil coalition, “urges” the AI Office to include a number of elements that it says are “a necessary basis for fundamental rights-based enforcement” of the Act.
It underscores that the guidelines must make clear that even “comparatively ‘simple’ systems” must be considered as AI systems, so that the Act applies to them: “Such systems should not be considered out of scope of the AI Act just because they use less complicated algorithms.”
Prohibitions and loopholes
The statement also focuses on systems that the Act prohibits – for example, those for remote biometric identification, “social scoring,” predictive policing and emotion recognition.
There are already multiple loopholes and exceptions that will allow many of the “prohibited” systems to be used in certain circumstances. A forthcoming Statewatch report will analyse this issue in detail.
The statement calls for the guidelines to clarify the prohibitions, “to prevent the weaponisation of technology against marginalised groups and the unlawful use of mass biometric surveillance.”
With regard to predictive policing, the statement calls for the guidelines to make clear that “predicting ‘risk of committing a criminal offence’ includes all systems that purport to predict a wide range of behaviours that are criminalised and have criminal law and administrative consequences.”
The statement also calls for the guidelines to “strengthen the language on remote biometric identification (RBI) to prevent forms of biometric mass surveillance.”
Putting fundamental rights at the centre
The guidelines must also “ensure that human rights law, in particular the EU Charter of Fundamental Rights, are the central guiding basis for the implementation,” says the statement.
It calls for all AI systems to be “viewed within the wider context of discrimination, racism, and prejudice.” This means the prohibitions “must be interpreted broadly in the context of harm prevention,” it argues.
Statement: Human rights and justice must be at the heart of the upcoming Commission guidelines on the AI Act implementation
On 11 December 2024, the European Commission’s consultation on its Artificial Intelligence (AI) Act guidelines closed. These guidelines will determine how those creating and using AI systems can interpret rules on the types of systems in scope, and which systems should be explicitly prohibited.
Since the final AI Act presents various grave loopholes when it comes to the protection of fundamental rights, particularly in the areas of policing and migration, it is important the guidelines clarify that fundamental rights are the central guiding basis to enable meaningful AI Act enforcement.
More specifically, we urge the AI Office to ensure the upcoming guidelines on AI Act prohibitions and AI system definition include the following as a necessary basis for fundamental rights-based enforcement:
Lastly, we note the shortcomings of the Commission’s consultation process: notably, the lack of advance notice and the short time frame for submissions; the failure to publish the draft guidelines, which would have enabled more targeted and useful feedback; the lack of accessible formats for feedback; and strict character limits on complicated, and at times leading, questions that required elaborate answers. For example, Question 2 on the definition of AI systems asked only for examples of AI systems that should be excluded from the definition, thereby allowing respondents to narrow, but not widen, the definition of AI.
We urge the AI Office and the European Commission to ensure that all future consultations related to the AI Act implementation, both formal and informal, give a meaningful voice to civil society and impacted communities and that our views are reflected in policy developments and implementation.
As civil society organisations actively following the AI Act, we expect that the AI Office will ensure a rights-based enforcement of this legislation, and will prioritise human rights over the interests of the AI industry.
Signatories
Organisations
Individuals