Original article was published on Artificial Intelligence on Medium
We are holding space to grieve, to mourn, and we are also full of righteous anger. The killings of George Floyd, Breonna Taylor, Ahmaud Arbery, Nina Pop, and Tony McDade are only the latest in what feels like an endless chain of police and vigilante violence against Black men, women, children, trans folks, and non-binary people.
At the same time, we recognize the intense power and possibility in the massive wave of multiracial mobilizations that is sweeping the country, even in the midst of the pandemic and in the face of brutal police repression. People everywhere are organizing to demand structural transformation, investment in Black communities, and deep and meaningful changes to policing in the United States of America. We know that criminal justice reforms alone are not enough to transform our society after hundreds of years of slavery, segregation, overpolicing and mass incarceration, disinvestment, displacement, and cultural, economic, and political erasure, but they are an important piece of the puzzle.
Yet even now, as we mobilize for Black lives, local, state, and federal police, as well as other agencies such as the DEA, CBP, ICE, and various U.S. military forces, are deploying a wide range of surveillance technologies to collect, share, and analyze information about protesters. Many people understand that mobile phones double as surveillance tools. But law enforcement agencies are also gathering photo and video documentation of protesters by filming protesters with body cameras, smartphones, video cameras, and drones; using both government and commercial software systems to scrape social media for photos and videos; gathering footage from CCTV systems; gathering footage from media coverage; and more. Many police departments, including Minneapolis police, then analyze that footage — both in real time and later on in the days and weeks after protests — and use facial recognition technology to attempt to identify individuals.
Face surveillance, the use of facial recognition technology for surveillance, thus gives the police a powerful tool that amplifies the targeting of Black lives. In addition, performance tests, including the most recent gold-standard government study by the National Institute of Standards and Technology, have found that many of these systems perform poorly on Black faces, echoing earlier findings from the Algorithmic Justice League. So not only are Black lives more subject to unwarranted, rights-violating surveillance, they are also more subject to false identification, giving the government new tools to target and misidentify individuals in connection with protest-related incidents.
We are in the midst of an uprising of historic magnitude, with hundreds of thousands of people already participating and potentially millions taking part in the days and weeks to come. At these scales, even small error rates can result in large numbers of people mistakenly flagged and targeted for arrest. Since many of these systems have demonstrated racial bias with lower performance on darker skin, the burden of these harms will once again fall disproportionately on Black people, further compounding the problem of racist policing practices and a deeply flawed and harmful criminal justice system.
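The arithmetic behind this concern can be made concrete. The sketch below uses purely hypothetical numbers (the crowd sizes and false-match rates are illustrative assumptions, not figures from any study) to show how even a seemingly small error rate, applied at the scale of a mass mobilization, produces thousands of wrongly flagged people — and how a rate that is several times higher for one group concentrates that harm:

```python
def expected_false_matches(crowd_size: int, false_match_rate: float) -> int:
    """Expected number of people wrongly flagged, assuming each face
    is screened independently at the given false-match rate.
    Both inputs here are hypothetical, for illustration only."""
    return round(crowd_size * false_match_rate)

# Hypothetical: 500,000 protesters screened at a 1% false-match rate.
print(expected_false_matches(500_000, 0.01))  # 5000 people wrongly flagged

# If the false-match rate for one demographic group is five times higher
# (bias audits have found large disparities; 5x here is an assumption),
# the same screening falls far more heavily on that group.
print(expected_false_matches(500_000, 0.05))  # 25000
```

The point of the sketch is only that misidentification scales linearly with both crowd size and error rate, so disparities in error rates translate directly into disparities in harm.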
Police are deploying these increasingly sophisticated surveillance systems against protesters with little to no accountability, and too often in violation of our civil and human rights, including First Amendment freedoms of expression, association, and assembly. These harms disproportionately impact the Black community based on historical patterns of discrimination and over-policing by law enforcement. Such patterns already lead to more frequent stops and arrests on the lesser standard of reasonable suspicion, compounded by other forms of discrimination in the criminal justice system (by judges, prosecutors, and risk assessment tools, among others) against Black lives that further lead to higher incarceration rates, lost job and educational opportunities, loss of livelihood, and loss of life.
The Algorithmic Justice League therefore urges that we rein in government surveillance of Black communities in general, and police use of face surveillance technology specifically. We have gathered resources to help organizers include demands to halt police use of face surveillance technology within broader campaigns for racial justice at municipal, statewide, and federal levels.
We are calling on every community, organization, and politician who is serious about racial justice to specifically include a halt on police use of face surveillance technology, among broader and sorely needed transformational policies.
This may seem daunting, but the tide is rising. In cities and states across the country, people have organized to successfully block the rollout of face surveillance technology:
On the municipal level, San Francisco became the first city to ban government use of facial recognition technology in 2019, with Oakland and Berkeley following suit. In Massachusetts, Somerville, Brookline, Northampton, Cambridge, and Springfield have successfully halted government use of this technology. Next week, Boston is poised to join the list. This progression demonstrates how successful advocacy in one city can have a domino effect, influencing change in neighboring communities.
On the state level, the State of California has enacted a three-year moratorium prohibiting police from using facial recognition with body cameras, which went into effect in January 2020. The State of New York is currently considering similar legislation prohibiting facial recognition in connection with officer cameras and has also introduced legislation proposing a moratorium on all law enforcement use. The State of Massachusetts is actively considering a broader moratorium on all government use of facial recognition — which would cover police use, in addition to other state agencies and officials.
On the federal level, since May 2019 there have been three public hearings on facial recognition technology (linked below) before the House Committee on Oversight and Reform to examine how facial recognition technology impacts our rights and emphasize its discriminatory impact on Black lives. At the first hearing, (1) Neema Singh Guliani of the ACLU, (2) Clare Garvie of the Center on Privacy & Technology at Georgetown, (3) Professor Andrew Ferguson of the David A. Clarke School of Law, and (4) former President of the National Organization of Black Law Enforcement Executives Dr. Cedric Alexander all testified along with Algorithmic Justice League founder Joy Buolamwini, who stated:
“These tools are too powerful, and the potential for grave shortcomings, including extreme demographic and phenotypic bias is clear. We cannot afford to allow government agencies to adopt these tools and begin making decisions based on their outputs today, and figure out later how to rein in misuses and abuses.”
Following this series of hearings, Congress is currently considering several initiatives that would limit the use of facial recognition technology, including legislation that would place limits on the use of face surveillance by federal law enforcement agencies.
Now is the time to build on the momentum of these successful initiatives.
Given the extent to which police power has been militarized and systematically weaponized against Black lives, it is more imperative than ever that we ensure that law enforcement cannot deploy face surveillance technology to suppress protests or infringe on civil rights and liberties.
If you have a face, you have a place in this conversation. The people have a voice and a choice, and we choose to live in a society where your hue is not a cue for the dismissal of your humanity. We choose to live in a society that rejects suppressive surveillance.
We choose to beat the drum for justice in solidarity with all who value Black lives.
— Algorithmic Justice League
Congressional Hearings on Facial Recognition Technology
About AJL: The Algorithmic Justice League is an organization that combines art and research to illuminate the social implications and harms of artificial intelligence. Our mission is to raise public awareness about the impacts of AI, equip advocates with empirical research to bolster campaigns, build the voice and choice of the most impacted communities, and galvanize researchers, policymakers, and industry practitioners to mitigate AI bias and harms. More at https://ajlunited.org.