21 December 2020
The Protection and Security Advisory Group (PASAG) advises the European Commission on the content of the EU security research programme, which provides funds for research and development on new surveillance and security technologies. PASAG recently published a report entitled 'AI and security opportunities and risks: Towards a trustworthy AI based on European values', which argues that artificial intelligence (AI) "can have extensive application in public security and cyber security, if sufficiently large data sets are available," but calls for more training, research and education to make AI "secure, reliable, unbiased and explainable."
See: PASAG: AI and security opportunities and risks: Towards a trustworthy AI based on European values (pdf)
Recommendations of the report:
1. Basic research is necessary to make AI more secure, reliable, unbiased, and explainable. Current threats such as adversarial machine learning undermine the trustworthiness of AI, and mitigations need to be researched. Assessments and metrics are needed to evaluate how reliable a given decision is.
2. AI’s impact on innovation cultures and new business models related to the digital economy requires further research and case studies to generate wider understanding of AI’s infrastructural importance to the economy and society.
3. AI is pervasive and can have extensive application in public security and cyber security, if sufficiently large data sets are available. Research projects should explain why they expect significant progress and provide clear KPIs to measure success and error rates.
4. Current basic AI technologies are insecure by design and not trustworthy by default. This does not necessarily affect all use cases, but research projects should be aware of it and provide measures to mitigate these shortcomings where appropriate.
5. The Ethics Guidelines for Trustworthy AI should be used as guidance towards an AI based on European values.
6. Trustworthy AI requires trustworthy computing capabilities. Many AI applications are deployed into the cloud for learning and scalable production. The EU should promote cloud-computing services operating exclusively under EU legislation to protect data from non-EU access.
7. European data pools will make AI much more effective than national or regional ones. This will require responsible trade-offs between effectiveness of AI and fundamental rights such as privacy, especially in the public security sector. The data quality and homogeneity of merged data is crucial for success.
8. Defensive measures should be developed to detect and combat malicious use of AI. This also includes measures against fake news and deep fakes, and requires interdisciplinary understanding of attacks against AI and of how AI can be used for attacks.
9. The talent pool for AI experts is very limited. Comprehensive education programmes sponsored by the EU and member states are necessary to achieve competitiveness. The public security sector will need dedicated funding to successfully attract talent for a sustainable deployment of AI within the government sector. Interdisciplinary research is needed to understand the new skill sets that will be required in the future, not only to develop and operate new AI systems but also to identify their potential societal impacts and how these should be addressed.
A previous PASAG report looked at the issue of "synergies" between the next security research programme (part of Horizon Europe, running from 2021-2027) and research undertaken through the forthcoming European Defence Fund (2021-27). The aim is to take advantage of overlapping areas of interest between civil and military research and development.
For an overview of issues related to the security research programme and proposals for the new budgets, see: Sci-fi surveillance: Europe's secretive push into biometric technology (The Guardian, link)
Background: Observatory: The European security-industrial complex
Statewatch does not have a corporate view, nor does it seek to create one, the views expressed are those of the author. Statewatch is not responsible for the content of external websites and inclusion of a link does not constitute an endorsement. Registered UK charity number: 1154784. Registered UK company number: 08480724. Registered company name: The Libertarian Research & Education Trust. Registered office: MayDay Rooms, 88 Fleet Street, London EC4Y 1DH. © Statewatch ISSN 1756-851X. Personal usage as private individuals "fair dealing" is allowed. We also welcome links to material on our site. Usage by those working for organisations is allowed only if the organisation holds an appropriate licence from the relevant reprographic rights organisation (eg: Copyright Licensing Agency in the UK) with such usage being subject to the terms and conditions of that licence and to local copyright law.