What Would Happen if AI Could Perfectly Recognize Faces?

Originally published by Luisa S. in Artificial Intelligence on Medium



Facial recognition — Price, benefits, and consequences

Michel de Montaigne and absolute certainty

In 1560, Michel de Montaigne attended the public trial of Arnaud du Tilh, who was sentenced to death for having stolen Martin Guerre’s identity. He had managed to do so because Martin Guerre had no way to protect his personal data; Guerre had been away for a long time and the two looked so much alike that even Guerre’s wife hadn’t noticed the difference.
The case of Arnaud du Tilh and Martin Guerre is extraordinary and hardly replicable, and Montaigne recounted it with astonishment in order to analyze the human attempt to reach absolute certainty. How could anyone ever be absolutely certain of what another person is, or of what they are thinking in a given moment?

If the case of Arnaud du Tilh and Martin Guerre happened nowadays, we would have many tools to solve the mystery: fingerprints or palm prints, iris scans or voice prints; we could even suggest using artificial intelligence for facial recognition. However, every time we take such a big step, we have to consider all the consequences, and in this instance, they could be catastrophic.

Photo by teguhjatipras on Pixabay

How facial recognition works

Facial recognition is a technology capable of identifying people in photos, videos, or in real time. It uses biometrics to map facial features and compares that information with a database: the larger the database, the more effective the identification.

When facial recognition software looks at your face, it sees geometry: rules, traits, numbers. Biometric identifiers are unique to each person and, since the method is based on calculations and measurements, it has mathematical precision. A method developed by Kazemi and Sullivan, two Swedish computer vision researchers, called One Millisecond Face Alignment with an Ensemble of Regression Trees, detects 68 landmarks spread over a typical face. These 68 points are enough to draw a map of your face: it becomes your signature, and you’ll be in the database forever.

The 68 face landmarks of Kazemi and Sullivan
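To make the idea concrete, here is a minimal, self-contained sketch in plain Python. It is purely illustrative: the 68 landmark coordinates are fabricated at random rather than detected from real photos, and real systems use trained detectors (such as dlib’s implementation of the Kazemi–Sullivan method) plus far more robust signatures. The sketch only shows the principle: turn 68 (x, y) landmarks into a scale-normalized numeric signature, then match a new photo against a database by distance.

```python
import math
import random

def signature(landmarks):
    """Turn 68 (x, y) landmark points into a scale-normalized signature:
    each point's distance from the landmark centroid, divided by the
    face's overall size, so the signature is comparable across photos."""
    cx = sum(x for x, _ in landmarks) / len(landmarks)
    cy = sum(y for _, y in landmarks) / len(landmarks)
    dists = [math.hypot(x - cx, y - cy) for x, y in landmarks]
    scale = max(dists) or 1.0
    return [d / scale for d in dists]

def match(sig, database):
    """Return the name whose stored signature is closest (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    return min(database, key=lambda name: dist(sig, database[name]))

# Fabricated landmarks: 68 random points per "person", for illustration only.
random.seed(0)
faces = {name: [(random.random(), random.random()) for _ in range(68)]
         for name in ("martin_guerre", "arnaud_du_tilh")}
database = {name: signature(pts) for name, pts in faces.items()}

# A slightly perturbed "new photo" of the same face still matches its owner.
noisy = [(x + random.gauss(0, 0.005), y + random.gauss(0, 0.005))
         for x, y in faces["martin_guerre"]]
print(match(signature(noisy), database))  # → martin_guerre
```

The point of the normalization step is that a raw landmark map changes with camera distance and framing; dividing by the face’s own scale makes the signature comparable across photos, which is why such a map can serve as a durable identifier.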

Why we should totally do that

Facial recognition’s greatest selling point is safety. If artificial intelligence could perfectly recognize faces, global security would benefit the most. It would help with both petty and serious crimes. Petty crimes often go unaddressed because the police have no time for them, so shop owners and small business owners could install cameras and facial recognition systems to monitor people and identify criminals. For serious crimes, the stakes are even higher: around 800,000 children disappear every year, about 2,000 a day. This is crazy, horrifying, especially when we start to think about what happens to them afterwards; they’ll come home, one day, if they are very, very lucky.
Maybe facial recognition could help. If an artificial intelligence system were so well trained and widespread as to look for these children all over the world, perhaps these numbers would decrease, or at least more of those children would come back home.

Apart from that, which is the most important application, facial recognition has many other uses: it unlocks phones, takes attendance in some colleges, and identifies passengers at airport departure gates. And, last but not least, you can use it to recognize your friends in photos on your phone, so you can find all the pictures with them.

Photo by Tobias Tullius on Unsplash

Why we absolutely should not do that

However, looking for pros was the easy part. A technology like that has two main drawbacks: privacy and bias.

Privacy is the most debated issue of the last few years. The simple fact is, the price of a well-trained artificial intelligence system is data: the more we ask of an AI, the more information we have to feed it. Privacy is a fundamental human right, protected in the constitutions of over 130 countries.

Philosophically speaking, according to the philosopher of information Luciano Floridi, we have four types of privacy: physical privacy, achieved by reducing others’ ability to invade our personal space; mental privacy, achieved by not allowing others to manipulate our minds; decisional privacy, achieved by not allowing others to interfere with our decisions; and, finally, informational privacy, the freedom from informational interferences or intrusions, achieved by keeping some of our personal facts to ourselves. Informational privacy is the one most at risk from the development of new technologies.
In a world like ours, the resistance that information encounters when it flows from sender to receiver is called “informational friction”. While old technologies, like television or radio, could only decrease informational friction, new ones operate in both directions, decreasing and increasing it, so the level of privacy we enjoy can go down or up accordingly.

A well-trained artificial intelligence system could significantly improve global security, but it would lead to an unprecedented level of mass surveillance and to the suppression of the right to live free from constant government supervision. Besides, the damage would be enormous if the data were stolen by hackers. The scales are balanced… more or less.

Photo by Tim Mossholder on Unsplash

Back in 1988, a British medical school was accused of discrimination because it was using software to choose which applicants should be interviewed; the UK Commission for Racial Equality established that the software was biased against women and applicants with non-European names.

Today, more than thirty years later, algorithms still have the same problem. AI is our daughter, and she inherits all our flaws. If we fail to build an appropriate dataset, even unconsciously, the algorithm will learn all our biased decisions and our historical and social inequities: if it receives enough cases of, say, pedophile priests, it will deduce that all priests are pedophiles and all pedophiles are priests; if most of the straight-A students in its database are men, it will deduce that a woman is unsuited for medical school; and so on.

There are countless concrete examples: in 2014, Brisha Borden was accused of trying to steal a bike from a kid; the previous summer, Vernon Prater had shoplifted from a store. They were both booked into jail, and an algorithm predicted the likelihood of each committing a future crime: Borden was rated high risk, Prater low risk. Brisha Borden was an 18-year-old black girl; Vernon Prater was a 41-year-old white man. Ironically, while Borden has not been charged with any new crimes, Prater will stay in prison for at least another six years for robbing a warehouse. In some US states, such algorithms are even used during criminal sentencing.

Photo by Tingey Injury Law Firm on Unsplash

The tug of war

At this point, it’d be wise to say we don’t know the answer. Both options have solid pros and serious cons, and we can’t just ignore that. Maybe, one day, our knowledge of artificial intelligence, data science, and machine learning will be such that we can avoid biases; or maybe we will become so good at protecting our data that we won’t fear hacker attacks and data theft. In any case, the consequences of mass surveillance would be extreme. But then again, what else matters, if it could save even just one child?

Bibliography

  1. E. V. Telle, Montaigne et le procès Martin Guerre, in Bibliothèque d’Humanisme et Renaissance, Librairie Eugénie Droz, 1975, vol. 37, n. 3
  2. V. Kazemi and J. Sullivan, One Millisecond Face Alignment with an Ensemble of Regression Trees, IEEE Conference on Computer Vision and Pattern Recognition, 2014
  3. L. Floridi, The Ontological Interpretation of Informational Privacy, Ethics and Information Technology, Springer, 2005