Can the EU make AI “trustworthy”? No – but they can make it just

European Digital Rights sets out the main points of their response to the European Commission's consultation on "trustworthy AI".

"How to ensure a “trustworthy AI” has been highly debated since the European Commission launched its White Paper on AI in February this year. Policymakers and industry have hosted numerous conversations about “innovation”, “Europe becoming a leader in AI”, and promoting a “Fair AI”.

Yet a “fair” or “trustworthy” artificial intelligence still seems a long way off. As governments, institutions and industry swiftly move to incorporate AI into their systems and decision-making processes, grave concerns remain about how these changes will affect people, democracy and society as a whole.

EDRi’s response outlines the main risks AI poses for people, communities and society, and sets out recommendations for an improved, truly ‘human-centric’ legislative proposal on AI. We argue that the EU must reinforce the protections already embedded in the General Data Protection Regulation (GDPR), set clear legal limits for AI by focusing on impermissible uses, and foreground principles of collective impact, democratic oversight, accountability and fundamental rights. Here’s a summary of our main points."

Can the EU make AI “trustworthy”? No – but they can make it just (EDRi, link)
