‘Aggression Detection’ Is Coming to Facial Recognition Cameras Around the World

Originally published by Dave Gershgorn in Artificial Intelligence on Medium



Russian firm NtechLab plans to roll out emotion detection features worldwide as soon as 2021

An employee of NtechLab, the company that won Moscow’s tender to supply facial recognition technology, demonstrates the system during an interview with AFP on February 5, 2020. Photo: Kirill Kudryavtsev/AFP/Getty Images

OneZero’s General Intelligence is a roundup of the most important artificial intelligence and facial recognition news of the week.

NtechLab, maker of Russia’s expansive real-time facial recognition surveillance system, is set to roll out “aggression detection” and “violence detection” features starting in 2021. The features will alert law enforcement when the algorithm believes someone is committing, or is about to commit, violence.

The firm recently got an injection of cash from the Russian government and an unnamed Middle Eastern partner. Flush with $15 million in new funds, it’s now eyeing expansion into Latin America, the Middle East, and potentially Asia, according to a report from Forbes.

But of course, when it comes to recognizing aggression, the algorithm’s interpretation of a situation could have a large margin of error.

Previous studies have found facial recognition to be racially biased, performing worse on people with darker skin, and tests of how emotion detection interacts with skin tone are even more damning.

A study that tried to gauge the emotions in 400 photos of NBA players found that Black players were consistently rated as more “angry” than white players.

“Until facial recognition assesses black and white faces similarly, black people may need to exaggerate their positive facial expressions — essentially smile more — to reduce ambiguity and potentially negative interpretations by the technology,” study author Lauren Rhue wrote about the research.

Let’s keep it short and sweet: NtechLab is spreading an untested, morally corrupt algorithm to the governments least likely to quibble over the ethics of how it’s used. And it’s going to make a lot of money doing it.

Now, a quick pivot to some of the most interesting A.I. research of the week:

Generating fake disasters

Since most modern A.I. is the practice of getting machines to find patterns more efficiently, the toughest task for an algorithm is dealing with extremes that don’t fit any information it has seen before. New research tries to generate realistic but extreme data, in an attempt to synthesize the kind of data that would trip up algorithms in the real world. The example shown models extreme rainfall in the U.S. that could lead to flooding, which in theory could lead to better disaster prediction.

What is an A.I. agent, really?

This one is a few months old, but I found it this week and really enjoyed it. DeepMind researcher Anna Harutyunyan dives into the philosophical background for how we think about artificial intelligence algorithms as entities that can act in our world, from Descartes to Ada Lovelace to Alan Turing. This is one to read for when dinner parties come back and you need something smart to say about A.I.

Computer science as a lottery

Here’s another essay worth reading. Google researcher Sara Hooker argues that the best ideas don’t always become widely adopted in computer science. Rather, the ideas that win are the ones that play nicest with the prevailing software and hardware of the day. This perspective is valuable now because it asks what big A.I. firms might be missing when they devote millions of dollars to training a single enormous algorithm that might eke out a few extra percentage points on a popular benchmark, instead of spending that money on other, less obvious approaches.