Facial recognition: the sinister trend quietly entering our lives

Humans have a natural fear of the unknown, and nowhere is that clearer than in our relationship with high-tech. Technology can be a conundrum, but it is typically a force for good. Artificial intelligence, however, is a fascinating concept that often gets overshadowed by anxiety. We cannot escape the narratives of robots rebelling against their makers; over the years, movies have hard-wired the imagery of an apocalyptic world in which humans become passive and sluggish. Reality is rather different: there is still some time before artificial intelligence advances to the point we see on screen.

At the moment, the concerns are more grounded, and they have little to do with the classic warfare between robots and humans.

Facial recognition is already a technology that many of us use daily. Most manufacturers let users unlock their smartphones with their faces, and the technology now contributes to a wide array of operations. Putting dangerous people behind bars before they can cause more harm is hard to object to ethically, and locating missing or kidnapped people is clearly a benevolent task.

But what about retail stores installing facial-scanning systems to determine customers’ age and gender? Tesco’s plan has two sides. One is scanning for potential “people of interest,” which is relatively understandable for a business trying to protect its property. The other, however, looks more like a way to enhance digital marketing strategies.

Under-the-hood operations of facial recognition

Facial recognition happens in a matter of seconds. Internally, the technology relies on biometrics to map facial features: it estimates, for instance, the distance between your eyes and the distance from your chin to your forehead. Those measurements are then converted into a mathematical entry and stored in a database.
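
To make that “mathematical entry” more concrete, here is a minimal sketch in Python. It is purely illustrative: the landmark coordinates, the choice of measurements, and the way they are combined are all assumptions, not the internals of any real recognition system.

```python
import math

# Hypothetical facial landmarks as (x, y) pixel coordinates. Real systems
# detect dozens of landmarks automatically; these values are made up.
landmarks = {
    "left_eye": (310, 220),
    "right_eye": (390, 218),
    "chin": (350, 420),
    "forehead": (350, 160),
}

def distance(a, b):
    """Euclidean distance between two landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def faceprint(points):
    """Turn raw landmark positions into a small vector of ratios.

    Dividing every measurement by the eye distance makes the entry
    roughly independent of how far the face was from the camera.
    """
    eye_dist = distance(points["left_eye"], points["right_eye"])
    return [
        distance(points["chin"], points["forehead"]) / eye_dist,
        distance(points["left_eye"], points["chin"]) / eye_dist,
        distance(points["right_eye"], points["chin"]) / eye_dist,
    ]

# The "database" is simply a mapping from an identifier to a stored entry.
database = {"person_42": faceprint(landmarks)}
print(database)
```

Real systems use far richer representations, typically learned embeddings with hundreds of dimensions, but the principle is the same: a face becomes a vector of numbers that can be stored and compared.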

Users who willingly opt in to facial authentication provide a reference photo to make verification possible. Whenever someone tries to unlock the phone, their features are compared against the ones extracted from that enrolled picture.
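
Matching then boils down to checking whether a freshly computed entry is close enough to the one stored at enrollment. The vectors and the threshold below are invented for illustration; real systems tune such thresholds carefully.

```python
def matches(enrolled, candidate, threshold=0.05):
    """Return True when two faceprints are close enough to count as one person."""
    # Plain Euclidean distance between the two stored vectors.
    gap = sum((e - c) ** 2 for e, c in zip(enrolled, candidate)) ** 0.5
    return gap < threshold

# Hypothetical faceprints: the template captured at enrollment, and one
# computed from the camera frame during an unlock attempt.
enrolled_print = [3.25, 2.55, 2.57]
attempt_print = [3.24, 2.56, 2.58]

print("unlocked" if matches(enrolled_print, attempt_print) else "rejected")
```

Raising the threshold lets in more lookalikes; lowering it locks out more legitimate unlock attempts. That tension is exactly where the accuracy worries below come from.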

The problem, for me, is that my face could end up in databases I know nothing about. And if I happen to have a lookalike, a facial recognition system might flag me as a person of interest. Since the technology’s accuracy is far from foolproof, misidentification and even wrongful convictions are possible.

The commotion about facial recognition and people’s privacy

To be fair, facial authentication is not what triggers the heated debate. Experts prefer to draw a clear line between facial authentication and facial recognition: unlocking your smartphone with your face is your choice, whereas covert facial recognition stirs the pot and amounts to a breach of civilian privacy.

Suppose you invite an attractive person out on a date, and their parents ask the waiter to take your picture. The parents then run your photo through a facial recognition app to check whether you are marriage material. A similar incident happened to a young man in Manhattan: the father uploaded the gentleman’s photo to the Clearview AI app. Although the parent had gained access to the app through business connections, he used it for personal research. How many similar investigations happen under our noses? And, most importantly, how many of them serve purposes more malicious than protecting daughters from creepy guys?

Beyond these concerns, citizens worry that facial recognition systems will lead to constant surveillance. As a cherry on top, agencies often deploy the technology without disclosing it to the public. That is what happened in the UK, when facial recognition was used around London’s King’s Cross: Argent, the property developer behind the site, had installed the surveillance without giving anyone a heads-up. Installations of technology that scans our bodies should, at the very least, come with compelling evidence to justify their use.

Many people therefore call for stricter bans on facial recognition in public spaces. Some states already prohibit the technology from being used with police body cameras; others choose to keep it off their streets entirely. Still, privacy advocates worry that some deployments could easily slip under the radar.

Does our physical safety come at the expense of our civil rights? Sometimes it might, provided the trade-off is openly negotiated and effective in the long term. While facial recognition could mean more criminals arrested, it needs to be perfected before it becomes widespread. There are, of course, more radical opinions, which hold that the technology is invasive by nature and that the potential harm significantly outweighs the benefits. Another headache is the legislative mess surrounding the application of facial recognition. A balanced approach must be maintained, and balance, like peace, is challenging to achieve.

A database of faces?

Cross-checking potentially dangerous people requires an extensive database of information. In a way, such data pools are nothing new; law enforcement agencies already maintain similar ones. The new model, however, suggests that these databases could become far more universal. Australia, for instance, has reportedly moved to include all driver’s license and passport photos in state databases, with a nationwide database to follow at some point. Introduced as a way to combat ID fraud, the plan comes with a colorful palette of privacy concerns.

Imagine such a database being hacked. Oh, wait, you don’t have to imagine it: Clearview AI has already reported such an incident. Along with that news came the accusation that Clearview keeps storing data even after users delete photos from their social media profiles. I, for one, would not want my old high school photos sitting in any database, ever.

Of course, most of us would agree that law enforcement agencies could work more effectively and reduce crime rates by taking advantage of facial recognition. But we need to solve the “false positives” problem before proceeding any further. London’s Metropolitan Police trials in 2016–2017 illustrate the technology’s potential inaccuracy: as reported, automated recognition misidentified approximately 102 people. So even if the intentions are relatively sound, the means are far from perfect just yet.
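
To see where such misidentifications come from, here is a toy illustration of how the match threshold trades wrongful alerts against missed matches. Every score and count below is invented; none of it reflects the Metropolitan Police trial data.

```python
# Hypothetical similarity scores (1.0 = identical) between faces captured on
# the street and entries on a watchlist. None of these values are real data.
genuine_matches = [0.92, 0.88, 0.95]           # people actually on the watchlist
innocent_passersby = [0.81, 0.86, 0.79, 0.90]  # people who merely resemble someone on it

def alert_counts(threshold):
    """Count correct alerts and false positives at a given match threshold."""
    true_alerts = sum(score >= threshold for score in genuine_matches)
    false_alerts = sum(score >= threshold for score in innocent_passersby)
    return true_alerts, false_alerts

for threshold in (0.80, 0.85, 0.90):
    hits, false_positives = alert_counts(threshold)
    print(f"threshold {threshold:.2f}: {hits} correct alerts, "
          f"{false_positives} innocent people flagged")
```

Loosen the threshold and innocent passersby get flagged; tighten it and genuine matches slip through. Until that balance is struck far better than it is today, every deployment on a busy street carries the risk of stopping the wrong person.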