In Code We Trust?

Originally published by Jay Aoyama in Artificial Intelligence on Medium

Nearly 100 years ago, the sociologist William Ogburn coined the term cultural lag to describe the disparity between the rates of change in «material» and «non-material» culture [1]. This distinction is best exemplified today in the relative impoverishment of our outdated cultural and ethical canon for making sense of the breakneck pace of technological advancement [2]. I argue that we are under-equipped to fully discern, much less navigate, the complex and unprecedented ethical quandaries raised by technology. In what follows, I will explain how the opacity of technological systems leads to ethical ambiguity. I will then explore two ways in which this process threatens to insidiously degrade or subvert fundamental human values (e.g., freedom, equality, justice):

  1. By exploiting our ethical commitments to manipulate our behaviour in the service of some non-moral end (i.e., profit)
  2. By promising a technological solution to systemic problems that can only be resolved through compassion and justice


If we do not understand how our technologies work, then we may be oblivious to the harm that they cause. Human cognitive capacities are finite. This much is trivial. How an artificial intelligence works may well be beyond the understanding of any single individual, or even that of a group of computer scientists[3]. The sophistication of technology is racing towards the ineffable[4]. As a result, the ramifications of misimplemented technology can become deep and far-reaching long before they are recognised by even our most diligent sentinels[5]. Technology, by its very complexity, stands to blind us to the moral dilemmas it brings about. And if we cannot see them, we cannot correct them.

The study of algorithmic bias concerns the various ways in which computer systems can systematically perpetuate socioeconomic inequality[6]. A 2016 investigation found that COMPAS, an algorithm used to score defendants’ risk of recidivism in bail and sentencing decisions, was statistically biased against black defendants[7]. COMPAS overestimated the recidivism of black defendants and underestimated that of white defendants. Instead of introducing fairness and objectivity into the process, COMPAS ended up amplifying the very prejudice it was meant to eradicate. It continues to be used to assess defendants, despite recent evidence showing that it is no better than untrained laypeople at predicting recidivism[8].
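Bias of this kind is typically surfaced by comparing error rates across groups: if one group’s non-reoffenders are flagged as high risk far more often than another’s, the scores are unequal in a morally relevant way. A minimal sketch of that comparison in Python, using entirely invented records rather than the actual COMPAS data:

```python
# Hypothetical records: (flagged_high_risk, actually_reoffended, group)
# Illustrative numbers only -- not drawn from any real dataset.
records = [
    (True,  False, "A"), (True,  True,  "A"), (True,  False, "A"),
    (False, False, "A"), (True,  True,  "A"), (False, True,  "A"),
    (True,  True,  "B"), (False, False, "B"), (False, False, "B"),
    (False, True,  "B"), (True,  True,  "B"), (False, False, "B"),
]

def false_positive_rate(records, group):
    """Share of non-reoffenders in a group wrongly flagged as high risk."""
    non_reoffenders = [r for r in records if r[2] == group and not r[1]]
    flagged = [r for r in non_reoffenders if r[0]]
    return len(flagged) / len(non_reoffenders)

# In this toy data, group A's non-reoffenders are flagged far more
# often than group B's -- the disparity ProPublica reported for COMPAS.
for group in ("A", "B"):
    print(group, round(false_positive_rate(records, group), 2))
```

The same comparison can be run on false negative rates; ProPublica’s finding was precisely that these error rates diverged by race even though overall accuracy looked similar.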

But let us suppose that we were able to quickly and reliably detect these sorts of situations. Unfortunately, our institutions are ill-equipped to offer the oversight necessary to uphold fundamental human values in the age of technology. Our legal and judicial processes are by their very nature protracted and reactive, rather than proactive. Technological advancement leaves behind a legally grey wake within which companies such as Airbnb, however well-intentioned, can quickly bring about serious ethical quagmires in the course of their meteoric expansion[9]. Technology brings us into uncharted ethical territory wherein human values are compromised, whether intentionally or incidentally, by profit motives.


We need look no further than the present day for a pressing and as yet unresolved example of our values being made subservient to technology. The addictiveness of social media is well-documented[10]. Content on social media newsfeeds is personalised by algorithms based upon users’ activity[11]. The more users interact with the site, the more ads they view. Advertisements comprise the main source of revenue for social media companies[12],[13]. Attention thus becomes the currency, and attention can be farmed by exploiting basic and predictable psychological tendencies, such as moral impulse. It is in this sense that our commitment to political ideals can be hijacked for financial gain.

An increasing number of people use social media as their primary news source[14]. But content curation is not a veritistic process. Algorithms tend to favour the circulation of sensationalist «clickbait» that appeals to the emotions[15] and confirms preexisting ideological beliefs. This content is then targeted towards the user groups most likely to engage with it[16], thereby cultivating disinformation and political polarisation[17]. Democracy at large is subject to the emergent effects of content curation[18] and its ulterior motives, whether financial or political. The algorithms used by Facebook to personalise users’ newsfeeds are proprietary[16]. This means that, at best, the subversion of human values is an accident. At worst, it is completely intentional.
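The underlying mechanic is easy to caricature: rank items purely by predicted engagement, and accuracy never enters the calculation. A toy sketch of such a ranker, with all headlines and scores invented for illustration:

```python
# Toy feed items: (headline, predicted_engagement, factually_accurate)
# Every value here is a hypothetical illustration.
items = [
    ("Measured policy analysis",       0.10, True),
    ("OUTRAGEOUS claim about rivals!", 0.90, False),
    ("Routine local news",             0.20, True),
    ("Shocking conspiracy 'revealed'", 0.75, False),
]

# An engagement-maximising ranker sorts only on the engagement score;
# the accuracy flag plays no role in the ordering.
feed = sorted(items, key=lambda item: item[1], reverse=True)

for headline, engagement, accurate in feed:
    print(f"{engagement:.2f}  accurate={accurate}  {headline}")
```

In this toy example the two inaccurate, sensational items rise to the top of the feed, which is the veritistic failure the paragraph above describes: the objective function optimises attention, not truth.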


That said, there is no greater danger than when we place our hope in technology as the ultimate solution to systemic problems of government and economy. This is not to say that technology cannot be a force for good in the world. Rather, the daydream of technological salvation is dangerous because it distracts us from addressing the root causes of our problems, which are often structural rather than technological in nature.

The development of genetically modified crops is often proposed as a solution to global hunger. The general pitch: if we could genetically engineer miracle crops with improved yields, then there would be enough food to go around[19]. Yet the problem is not that there is not enough food to go around[20]. In fact, enough food was produced in 2009 to feed 1.5× the global population, or 10 billion people[21]. Hunger is a consequence of political instability, environmental catastrophe, and ubiquitous waste (among other things)[20].

The myth of food scarcity merely masks the true source of global famine. But, even worse, the proliferation of GM crops might give rise to new agroindustrial hegemonies. GM crops are patented, and the measures used by companies to protect their intellectual property more often than not put those suffering from starvation in an economic stranglehold[22]:

  • Terminator seeds are engineered to be viable for only a single generation, ensuring that farmers must repurchase seed every season. (Fortunately, they were never commercialised, thanks to global activism.)
  • GM crops are often engineered to be specifically resistant to proprietary herbicides produced by the same company that sells the seeds. Farmers are thus forced into buying these herbicides.
  • Manufactured monopolies on seed availability limit farmers’ options and push them towards GM crops. The resulting loss of regional biodiversity produces a monoculture that is uniformly susceptible to crop failure from resistant insects and weeds.

The lesson here is simple: we do not need a «superplant» to solve global famine. We do not need an app to save the world. Technological messianism reduces intractable problems of governance to neat and tidy problems of engineering. But this project is fundamentally misguided, because many human affairs cannot be bypassed through technological innovation; they must be mediated through compassionate conciliation.


The road to dystopia is paved with good intentions. As it stands, we are woefully unprepared for the ethical challenges brought about by technology. In this essay, I have tried to sound a warning about technological proliferation unfettered by philosophical reflection. The trend extends across disparate technological frontiers, from judicial algorithms to social media to GM crops. We can barely keep up with the ethical dilemmas engendered by present-day technology. At the same time, the seductive mythos of technology obfuscates rather than clarifies the problems we face. What we need now is not more cutting-edge technology. The next pivotal breakthrough will come when the gap between ethics and technology is closed.

¹ Ogburn, William Fielding. Social Change with Respect to Culture and Original Nature. New York: Viking, 1922.

² I mean «technology» in the broadest sense: any application of scientific knowledge for practical ends.

³ O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Broadway Books, 2017.

⁴ Clarke, Arthur C. Profiles of the Future: An Inquiry into the Limits of the Possible. London: Pan Books, 1973.

⁵ These problems are amplified by orders of magnitude when we’re no longer the ones designing the systems we use:

Galeon, Dom. “Google’s Artificial Intelligence Built an AI That Outperforms Any Made by Humans.” Futurism. Futurism LLC, February 19, 2018.

⁶ Kirkpatrick, Keith. “It’s Not the Algorithm, It’s the Data.” Communications of the ACM 60, no. 2 (2017): 21–23.

⁷ Larson, Jeff, Surya Mattu, Lauren Kirchner, and Julia Angwin. “How We Analyzed the COMPAS Recidivism Algorithm.” ProPublica. Pro Publica Inc., May 23, 2016.

⁸ Dressel, Julia, and Hany Farid. “The Accuracy, Fairness, and Limits of Predicting Recidivism.” Science Advances 4, no. 1 (2018).

⁹ Barron, Kyle, Edward Kung, and Davide Proserpio. “The Effect of Home-Sharing on House Prices and Rents: Evidence from Airbnb.” SSRN Electronic Journal, 2020.

¹⁰ Guedes, Eduardo, Antonio Egidio Nardi, Flávia Melo Campos Leite Guimarães, Sergio Machado, and Anna Lucia Spear King. “Social Networking, a New Online Addiction: a Review of Facebook and Other Addiction Disorders.” Medical Express 3, no. 1 (2016).

¹¹ Pariser, Eli. The Filter Bubble: What the Internet is Hiding from You. New York: Penguin Books, 2011.

¹² Johnston, Matthew. “How Facebook Makes Money.” Investopedia. Forbes Media, February 5, 2020.

¹³ Reiff, Nathan. “How Twitter Makes Money: Advertising Comprises the Bulk of Revenue.” Investopedia. Forbes Media, February 19, 2020.

¹⁴ David, Clarissa C., Rosel S. San Pascual, and Eliza S. Torres. “Reliance on Facebook for News and Its Influence on Political Engagement.” Edited by Antonio Scala. PLoS One 14, no. 3 (March 19, 2019).

¹⁵ Owen, Laura Hazard. “One Year in, Facebook’s Big Algorithm Change Has Spurred an Angry, Fox News-Dominated – and Very Engaged! – News Feed.” Nieman Lab. Harvard University, March 15, 2019.

¹⁶ Manjoo, Farhad. “Can Facebook Fix Its Own Worst Bug?” The New York Times. The New York Times Company, April 25, 2017.

¹⁷ Bleiberg, Joshua, and Darrell M. West. “Political Polarization on Facebook.” Brookings. The Brookings Institution, July 29, 2016.

¹⁸ Bakshy, Eytan, Solomon Messing, and Lada A. Adamic. “Exposure to Ideologically Diverse News and Opinion on Facebook.” Science 348, no. 6239 (June 5, 2015): 1130–32.

¹⁹ Nash, J. Madeleine. “This Rice Could Save a Million Kids a Year.” TIME, July 31, 2000.

²⁰ “2018 World Hunger and Poverty: Facts and Statistics.” World Hunger Education Service, n.d.

²¹ Holt-Giménez, Eric, Annie Shattuck, Miguel Altieri, Hans Herren, and Steve Gliessman. “We Already Grow Enough Food for 10 Billion People … and Still Can’t End Hunger.” Journal of Sustainable Agriculture 36, no. 6 (2012): 595–98.

²² “Do We Need GM Crops to Feed the World?” Canadian Biotechnology Action Network, December 2015.