Vestager warns against predictive policing in Artificial Intelligence – EURACTIV



Certain Artificial Intelligence applications, including forms of predictive policing, are ‘not acceptable’ in the EU, the European Commission’s Vice-President for digital policy, Margrethe Vestager, has said.

Delivering a keynote speech as part of Tuesday’s (30 June) European AI Forum, Vestager reflected on the pros and cons of employing certain AI applications in Europe, highlighting the problems that could emerge as a result of an irresponsible application of next-generation technologies.

“If properly developed and used, it can work miracles, both for our economy and for our society,” Vestager said. “But artificial intelligence can also do harm,” she added, highlighting how some applications can lead to discrimination, amplifying prejudices and biases in society.

“Immigrants and people belonging to certain ethnic groups might be targeted by predictive policing techniques that direct all the attention of law enforcement to them. This is not acceptable.”

AI White Paper feedback 

The EU’s digital czar also reflected on the recently closed public consultation on the Commission’s White Paper on Artificial Intelligence, which was published in February.

The White Paper laid out the Commission’s approach to forging a future regulatory landscape for AI, noting that technologies carrying a high risk of abuse that could potentially lead to an erosion of fundamental rights should be subjected to a series of new requirements.

Mid-June marked the deadline for stakeholders to submit feedback on the White Paper, and concerns raised ranged from the use of biometric technology to the operation of Automated Decision Making (ADM) software.

For its part, rights group Access Now was one organisation that called for a blanket ban on some technologies, including ‘uses of AI to make behavioural predictions with a significant effect on people’, such as predictive policing technologies.

“No safeguard or remedy would make indiscriminate biometric surveillance or predictive policing acceptable, justified or compatible with human rights,” a statement from Fanny Hidvegi, Europe Policy Manager at Access Now, read.

Along similar lines, lobby group European Digital Rights (EDRi) also called for a ban on predictive policing software in the EU, saying that it represents an ‘impermissible use’ of AI.

In its submission to the public consultation, Access Now also renewed calls for an explicit ban on certain technologies, including indiscriminate biometric surveillance software, facial emotion analysis applications, and the use of AI systems at borders.

The UK has been accused of conducting mass-scale predictive policing across forces all over the country. A 2019 report from Liberties stated that UK law enforcement authorities had been using predictive policing programs to forecast where and when crime would happen, as well as who was likely to commit offences.

In this vein, Vestager noted on Tuesday how the 1,200 respondents to the public consultation raised a number of concerns with regard to the possible erosion of rights brought about by improper use of AI, as well as the challenges in distinguishing between high-risk and low-risk applications.

“Most of these contributors agreed that AI, if not properly framed, might compromise our fundamental rights or safety,” she said. “Many of them agreed with us that we should focus our attention on high-risk applications. But rather few of them were convinced that we had already found the silver bullet that allows us to distinguish between high and low-risk applications.”

German Presidency

Meanwhile, speaking a day before Germany takes over the EU Council Presidency, Minister of State for Digitalisation Dorothee Bär laid out her country’s three priorities in the field of Artificial Intelligence for the next six months.

These include promoting innovation, with plans to “establish world-leading AI systems in key ecosystems”; meeting a growing demand for computing capacity by improving access to high computing power and making the most of innovative data sets; and, more generally, pursuing a people-centred, innovation-friendly framework for AI in Europe.

For its part, Germany’s feedback to the Commission’s White Paper said that ‘clear, legal requirements’ are needed for the use of certain biometric identification systems, due to the risks to civil liberties that may arise.

However, referring to the restrictions on the processing of biometric data under the GDPR, Germany’s feedback also noted that the “ban on processing biometric data under data protection law only limits the use of such systems.”

A follow-up from the Commission on the consultation is scheduled for early 2021.

[Edited by Benjamin Fox]