Source: Deep Learning on Medium
More Proof You’re Not That Paranoid, The Limitations of Deep Learning, and Google Fired Another Labor Organizing Worker
This week in tech
The New York Times opinion section ran a bracing report about the extraordinary proliferation of location data and the associated obliteration of privacy. Location data is basically impossible to anonymize; as Paul Ohm comments in the piece, “D.N.A. is probably the only thing that’s harder to anonymize than precise geolocation information.” The Times reporters describe “easily” identifying people in the dataset and assembling a veritable diary of their lives. They found an Amazon engineer who took an interview at Microsoft, a singer who performed at Trump’s inauguration, a senior Department of Defense official who attended the Women’s March, and more.
Such datasets have unnerving implications well outside their “intended” use in advertising, which isn’t especially benign to begin with. Stalkers, abusers, unscrupulous employers, paranoid spouses, law enforcement officials, political opportunists, journalists, and many others will find uses for this data. The disruptive potential is enormous and, if the past decade is any guide, probably not in a good way.
Throughout 2019, data privacy has been one of Teb’s Lab’s top issues. We have written about the near impossibility of anonymizing datasets, especially when there are so many data leaks; interviewed Paul Francis of the Max Planck Institute about how the field of data anonymity should proceed in this environment; argued that the exploding data stockpiles and costly machine learning algorithms deployed by surveillance capitalism are a serious environmental problem; and reflected on the political and social implications of attempts to track, measure, and manipulate us. I expect data privacy will be a defining issue of the 2020s, and that paranoia will remain a best practice.
Several experts believe AI is headed for a slowdown. While deep learning has brought incredible advances over the last decade, the cost of achieving state-of-the-art results has skyrocketed. One result is that moneyed industrial research labs like Alphabet’s DeepMind have been driving progress while university researchers are left behind. Another is that while absolute results on many tasks have clearly improved under the deep learning regime, the improvement per unit of computation has not risen nearly as dramatically. In a Wired interview, Facebook’s head of AI, Jerome Pesenti, said many domains have already “hit a wall” with respect to this computational cost.
Pesenti described some of deep learning’s other limitations too: “It can propagate human biases, it’s not easy to explain, it doesn’t have common sense, it’s more on the level of pattern matching than robust semantic understanding.” We’ve seen facial recognition software propagate racial bias, as yet another report confirmed this week. We’ve also seen a deadly example of this lack of common sense when one of Uber’s self-driving cars failed to recognize a pedestrian outside of a crosswalk.
Yoshua Bengio, another AI researcher who has been critical of deep learning’s limitations, explained in an interview with IEEE Spectrum that researchers like him are not strictly opposed to deep learning: “Researchers are looking to find the places where it’s not working as well as we’d like, so we can figure out what needs to be added and what needs to be explored.”
Another Google employee claims they were fired for their role in labor organizing. There is already a federal investigation into the legality of four previous firings. One interpretation of events is that Google isn’t worried about the investigation; after all, the company is doubling down on the behavior in question. If so, it would be a classic corporate move: pay a fine Google can easily afford rather than allow unionization efforts that could prove far more costly in the long run.
Teb’s Favorite Tidbits: