How AI Mimics Shocking Racism and Prejudice From the Real World

Facial recognition software and self-driving cars have ‘problems’ with darker skin

Photo by Markus Spiske on Unsplash

In 2018, it was revealed that Amazon’s facial recognition software, ‘Rekognition,’ had matched Congresspeople’s headshots with photos from a mugshot database.

In total, 28 members of Congress were falsely identified as people who had previously been arrested for criminal offenses.

The false matches disproportionately affected people of color, including six members of the Congressional Black Caucus, among them civil rights leader Rep. John Lewis.

In July of 2018, the American Civil Liberties Union conducted an independent analysis of Rekognition using the software’s default settings.

The analysis compared photos of every member of Congress against 25,000 publicly available arrest photographs.

The results showed that 40% of the false matches were people of color, even though they make up only 20% of Congress.

Image Source: ACLU.org

In January of 2019, it became apparent that Amazon had failed to fix the problems plaguing the Rekognition software.

The software, which is used by both police and Immigration and Customs Enforcement, was still misidentifying light-skinned women and performing even worse on anyone with darker skin.

Research from MIT and the University of Toronto revealed that when presented with facial images of light-skinned women, Rekognition labeled 19% of them as men; women with dark skin were mislabeled as men 31% of the time.
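
Audits like this come down to disaggregated evaluation: instead of reporting one overall accuracy figure, the same error metric is computed separately for each demographic group. The snippet below is a minimal, illustrative Python sketch of that idea; the records are made-up placeholder data, not the study’s dataset.

```python
from collections import defaultdict

def error_rate_by_group(records):
    """Compute the misclassification rate separately for each group.

    `records` is an iterable of (group, true_label, predicted_label) tuples,
    e.g. a skin-tone group plus the true and predicted gender.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, true_label, predicted_label in records:
        totals[group] += 1
        if predicted_label != true_label:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Made-up placeholder predictions for images of women, purely illustrative.
records = [
    ("lighter-skin", "woman", "woman"), ("lighter-skin", "woman", "man"),
    ("lighter-skin", "woman", "woman"), ("lighter-skin", "woman", "woman"),
    ("darker-skin", "woman", "man"),    ("darker-skin", "woman", "woman"),
    ("darker-skin", "woman", "man"),    ("darker-skin", "woman", "woman"),
]

for group, rate in error_rate_by_group(records).items():
    print(f"{group}: {rate:.0%} of women labeled as men")
```

Run on real predictions rather than placeholder tuples, the gap between the two groups’ rates is exactly the kind of disparity reported above.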

AI systems designed to help autonomous cars navigate our roads show similar problems with racial bias.

Graduate researchers at the Georgia Institute of Technology have found that state-of-the-art object-detection systems detect pedestrians with darker skin tones less accurately.

The researchers tested eight separate object-detection models against a pool of pedestrian images, splitting the pedestrians into groups with lighter and darker skin tones according to the Fitzpatrick scale, which classifies human skin color.

On average, detection accuracy was five percentage points lower for the group with darker skin. The gap persisted even when the researchers controlled for variables such as time of day and obstructed views.

Through further analysis, the researchers found that the bias was probably caused by two things: too few examples of dark-skinned pedestrians in the training data and too little emphasis on learning from those examples. They say the disparity could be mitigated by adjusting both the data and the algorithm.
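
Both remedies can be illustrated in code. The sketch below is a hypothetical PyTorch example, not the Georgia Tech implementation: it samples under-represented examples more often when batching (adjusting the data) and up-weights their contribution to the training loss (adjusting the algorithm). The dataset, group labels, and weights are all toy placeholders.

```python
import torch
from torch.utils.data import TensorDataset, DataLoader, WeightedRandomSampler

# Toy stand-in dataset: features, labels, and a per-example group id
# (0 = lighter skin tone, 1 = darker skin tone). Purely illustrative.
features = torch.randn(1000, 16)
labels = torch.randint(0, 2, (1000,))
groups = torch.cat([torch.zeros(900, dtype=torch.long),
                    torch.ones(100, dtype=torch.long)])  # 9:1 imbalance
dataset = TensorDataset(features, labels, groups)

# 1) Adjust the data: sample under-represented examples more often,
#    so each batch sees both groups in roughly equal proportion.
group_counts = torch.bincount(groups).float()
sample_weights = (1.0 / group_counts)[groups]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(dataset),
                                replacement=True)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

# 2) Adjust the algorithm: weight the loss so mistakes on the
#    under-represented group count for more during training.
model = torch.nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss(reduction="none")
group_loss_weight = torch.tensor([1.0, 2.0])  # extra emphasis on group 1

for x, y, g in loader:
    optimizer.zero_grad()
    per_example_loss = loss_fn(model(x), y)
    weighted_loss = (per_example_loss * group_loss_weight[g]).mean()
    weighted_loss.backward()
    optimizer.step()
```

In practice a team would tune, and likely choose between, the two levers, since oversampling and loss re-weighting can compound; the point here is simply that both fixes the researchers describe are straightforward to express once the imbalance has been measured.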

What can we do in 2020?

Image Source: Unsplash.com

These problems arise because AI systems need large and diverse data sets to train their algorithms fairly; when the training data under-represents certain groups, the resulting systems serve those groups less accurately.

Therefore it’s time to start building ethical and inclusive AI systems that respect both human dignity and rights.

By reducing the exclusion of particular races and genders, and by enabling marginalized communities to engage in both the development and governance of AI, we can create systems that encourage full-spectrum inclusion.

There is also a dire need for businesses to create ethical frameworks for the data they feed into machine learning technologies. That data must be unbiased and transparent, especially when it is used to build AI that will work with the general public.

Alongside lawmakers, technologists, and researchers, this journey will need storytellers who embrace the search for truth through science and art.

Storytelling has the power to change perspectives, encourage action, alter damaging patterns, and reaffirm to others that their experiences matter.

We need to take action by using our collective voices to call for more diversity in the field of Artificial Intelligence.