Becoming a Comet



How To Design Benevolent Artificial Intelligences

Comet (Corpus: Omens and Portents) Adversarially Evolved Hallucination, 2017. Trevor Paglen. Image courtesy of the artist and Metro Pictures, New York.

1. Comet fever rising

A gushing shower from above, pouring viciously from a rupture in the sky, through the layers of atmospheric gases towards us, the starry-eyed earthlings. This might well be the event that ends terrestrial life as we know it. But for now, it remains an otherworldly portrayal, telling us that something uncanny is going on in the world.

The image Comet (Corpus: Omens and Portents) Adversarially Evolved Hallucination (2017) by artist Trevor Paglen was made by an AI that has been trained to see things associated with portents throughout history. He has produced a meta omen of sorts, by providing a deep learning network with a training library of thousands of photographic images of the harbingers of bad fortune: comets, eclipses, rainbows and black cats. In essence, the resulting image shows what the machine sees when it glances at the symbols of our fears about the unexpected and the extraordinary.

In ancient cultures the sudden appearance of a comet was considered to be a sign from the gods. As it disturbed the harmony of the sky, it was deemed to be a bad omen. “Comet fever” has also been rife in modern times: “When Halley’s Comet appeared in the sky in 1910 as expected, many feared that civilization would be poisoned by prussic acid, which had recently been discovered to be a component of the tail. Clever wheeler-dealers sold comet pills to ward off this eventuality. And when the Hale-Bopp comet gave an impressive performance in the terrestrial sky in 1997, 39 members of the Heaven’s Gate sect committed suicide because they believed it would enable them to leave Earth and travel to an alien space ship, which was supposedly accompanying Hale-Bopp.”¹

This time the Comet, made by Paglen in collaboration with AI, hints at the arrival of a different kind of alien intelligence, with a new opportunity to believe and prepare. And the fever is here: the crowd, waiting aghast for the deep impact, includes devotees across the design community, the contemporary art world, the Silicon Valley tech elite, and the mainstream media — all fervently anticipating what it will mean when machines first behave more like humans, and then more like a new species of machine.

2. Deep turn to prejudice

AI is usually thought of as a mixture of advanced computer software and hardware that can undertake self-learning perceptual, computational and physical tasks. It is generally expected to become the pertinent computational infrastructure that allows automation on an unforeseen scale, giving machines the ability to do things automatically with us, for us and without us.

The good old-fashioned AI systems — such as IBM’s famous Deep Blue, which beat the reigning world chess champion Garry Kasparov in 1997 — relied on logical top-down models that represent the problem space, and on brute computational force to calculate outcomes from these general rules.

The recent advances in AI have resulted from a different approach dubbed deep learning. Deep learning systems are more analogous to biological neural networks, as they do not rely on symbols or language for their inputs or outputs. Artificial neural networks operate bottom-up, with several computational layers that each analyze parts of the data perceived by or provided to the system. They are self-learning machines that make sense of things without anyone specifically telling them what those things are.
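To make the bottom-up idea concrete, here is a minimal sketch of such a layered network in plain Python. It is a toy of my own, not any of the systems discussed in this essay: a two-layer network that learns the XOR function purely from four example inputs, with no symbolic rules supplied in advance.

```python
# A toy two-layer neural network (illustrative only): it learns XOR
# bottom-up from examples, with no symbolic rules given in advance.
import numpy as np

rng = np.random.default_rng(0)

# Training data: four inputs and the XOR labels the network must infer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two computational layers; each analyzes the output of the layer below.
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: raw input flows up through the layers.
    h = np.tanh(X @ W1 + b1)        # hidden layer
    out = sigmoid(h @ W2 + b2)      # output layer

    # Backward pass: adjust the weights to shrink the prediction error.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should be close to [[0], [1], [1], [0]]
```

Everything the network “knows” about XOR ends up encoded in those weight matrices, which is exactly why such systems are opaque: the learned parameters carry no human-readable explanation.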

Deep learning has shaped the whole field in only five years, but it is far from a magical solution to all the challenges of AI. As Gary Marcus, a professor of cognitive psychology at NYU, puts it: deep learning is still problematic because it is greedy (it requires massive amounts of training data), brittle (it is good only at things it has been specifically trained for and bad at everything else), opaque (the parameters of neural networks are comprehensible only to the AI itself), and shallow (it has no innate knowledge or common sense about the world).²
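Marcus’s “brittle” point is easy to demonstrate with a toy example of my own (not one from his paper): a flexible model fitted on a narrow range of data can be confidently, wildly wrong the moment it is asked about anything outside that range.

```python
# A minimal sketch of "brittle": a model fitted on a narrow training
# range fails on anything outside it. Toy data, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Train on x in [0, 1], where the true signal is a gentle sine wave.
x_train = rng.uniform(0, 1, 30)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.05, 30)

# A flexible model (degree-9 polynomial) fits the training range well...
coeffs = np.polyfit(x_train, y_train, deg=9)

for x in (0.5, 1.0, 1.5, 2.0):
    true = np.sin(2 * np.pi * x)
    pred = np.polyval(coeffs, x)
    print(f"x={x:.1f}   true={true:+.2f}   model={pred:+10.2f}")
# ...but its predictions blow up beyond x=1: good only at what it was
# trained on, and bad at everything else.
```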

How an AI looks at us and at the world is inseparable from what it is and what it does. “The processes of how these intelligences sense the world and think about the world are so closely coupled that we cannot separate them,”³ describes technology and design theorist Benjamin Bratton. There lies the real challenge, too. By design — and even with massive amounts of computational power — deep learning AI systems are prejudicial black boxes that are utterly unaccountable for their actions in the world, and always bounded by the ideals of their creators, the limitations of imperfect data, their prescribed learning models, and the subjective classification of outcomes.

3. AI won’t become benevolent by itself

We are witnessing the germination of several different deep learning artificial intelligences. Many are designed for some narrow purpose (Is this a cat?) and some for broader purposes (Drive me home!). But what is causing the most fever is the mirage of artificial general intelligence (AGI), so often gleaming in the dusty sky of the Black Rock Desert where the tech elite hallucinates at Burning Man.

The much-hyped promise of AGI is perhaps most vivid as told in the tale of the Singularity: a future where AI will overtake human intelligence through its exponential growth. With its unprecedented capacity to wring meaning out of the googolplex (ten raised to the power of a googol, or a 1 followed by more zeroes than you could ever write) of events and interactions in the world, it could eventually solve all our imaginable problems.

This story has become a sort of religion among a cadre of contemporary technologists, but its technological and philosophical premises are deeply problematic. The idea of the Singularity — like most AI development in general — maintains a technocratic and reductionist fallacy of a world that can be modeled, quantified, understood, simulated, optimized and controlled, if only there is enough information and computational power available.

The first problem relates to Moore’s law, the expected exponential growth curve of computational power so often cited by the Singularitarians. It seems increasingly evident that the curve might not be exponential after all, but perhaps more akin to an S-curve, where the accelerating phase is followed by a phase of deceleration. This might happen because of physical limits, economic costs, or the monstrous energy demands of the planetary-scale computation required to run the AGIs — or all three simultaneously.

Joi Ito, the Director of the MIT Media Lab, believes that the notion of the Singularity is fundamentally flawed because of its reductionist view of both the world and computing. “Whether you are on an S-curve or a bell curve, the beginning of the slope looks a lot like an exponential curve. … An exponential curve to systems dynamics people shows self-reinforcement, i.e., a positive feedback curve without limits. Most people outside the singularity bubble believe in S-curves, namely that nature adapts and self-regulates and that even pandemics will run their course.”⁴
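Ito’s point about the early slopes is simple to check numerically. The sketch below (with made-up, purely illustrative parameters) grows an exponential and a logistic S-curve side by side from the same starting point and growth rate; for a long while the two are nearly indistinguishable, and only later does the S-curve bend toward its ceiling.

```python
# Exponential vs. logistic (S-) growth: identical-looking beginnings.
# Parameters are illustrative, not taken from any real forecast.
import numpy as np

r = 0.8        # shared growth rate
K = 1000.0     # carrying capacity: the limit where "nature self-regulates"
t = np.linspace(0, 10, 11)

exponential = np.exp(r * t)                       # pure positive feedback, no limits
logistic = K / (1 + (K - 1) * np.exp(-r * t))     # same start, same rate, with a ceiling

for ti, e, s in zip(t, exponential, logistic):
    print(f"t={ti:4.1f}   exponential={e:9.1f}   logistic={s:7.1f}")
# The early rows match closely; by t=10 the exponential is near 3000
# while the S-curve is bending toward its ceiling of 1000.
```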

The second problem with AGI lies in the complicated concept of intelligence, or how the AIs among us can understand and act in, with, and for the world. “What mainstream media call Artificial Intelligence is a folkloristic way to refer to neural networks for pattern recognition. The ‘intelligence’ of neural networks is, therefore, just a statistical inference of the correlations of a training dataset,” explains Matteo Pasquinelli, Professor in Media Theory at the Karlsruhe University of Arts and Design.

Pasquinelli continues that “the complex statistical induction that is performed by neural networks gets close to a form of weak abduction, where new categories and ideas loom on the horizon, but it appears invention and creativity are far from being fully automated. The invention of new rules (an acceptable definition of intelligence) is not just a matter of generalization of a specific rule (as in the case of induction and weak abduction) but of breaking through semiotic planes that were not connected or conceivable beforehand, as in scientific discoveries or the creation of metaphors (strong abduction).”⁵

The collective comet fever can still induce us to suspend our disbelief in this planetary-scale AI event. The easy clickbait narrative of the Singularity implies that an AI comet is about to collide with Earth, and we are left to believe that this otherworldly intelligence will give us a smooth ride. Alternatively, we can of course acknowledge the impossibility of modeling all the complexities of the world and the messiness of human life, and stop presuming that an intrinsically ethical AGI will just arise from the deep layers of computation.

As Joi Ito laments in Resisting Reduction: A Manifesto: “For Singularity to have a positive outcome requires a belief that, given enough power, the system will somehow figure out how to regulate itself. The final outcome would be so complex that while we humans couldn’t understand it now, “it” would understand and “solve” itself.”⁶ Doesn’t seem that plausible, does it?

Graft and Ash for a Three Monitor Workstation, 2016. Sondra Perry. Photo: Kevin Kline, Squeaky Wheel Film & Media Art Center.

4. Nothing is neutral

To design genuinely benevolent AIs requires that we admit our responsibility, and accept that AIs are never neutral, but always imbued with our values and ideologies. This means that truly, “we’re inside of what we make, and it’s inside of us,”⁷ as Donna Haraway acutely observed in her Cyborg Manifesto as early as 1985.

James Bridle, artist and writer exploring the reverberations of the digital and networked world, echoes this sentiment in his book New Dark Age: “Technology does not emerge from a vacuum. Rather, it is the reification of a particular set of beliefs and desires: the congruent, if unconscious dispositions of its creators. In any moment it is assembled from a toolkit of ideas and fantasies developed over generations, through evolution and culture, pedagogy and debate, endlessly entangled and enfolded.”⁸

Hidden in plain sight, encoded in the fabric of our societies, live the human biases that implicitly and inevitably get designed into the code and behaviors of AI. This is what Nora Khan charts in her critique of the anthropomorphized “intelligent-seeming systems” created by the Silicon Valley computer culture: “The AI from this culture is slippery, a masterful mimic. It pretends to be neutral, without any embedded values. It perfectly embodies a false transparency, neutrality, and openness. The ideological framework and moral biases that are embedded are hidden behind a very convincing veneer of neutrality.”⁹

These cultural dependencies are explicated point-blank in the work of artist Sondra Perry. She examines how race and body politics are tragically mechanized into code and back into the world. In Graft and Ash for a Three Monitor Workstation (2016), her 3D-scanned self-portrait avatar explains how she was “rendered to her fullest ability,” but the code fails to “replicate her fatness in the software that was used to make us.”

A “neutral body” is always a reflection of dominant cultural standards, and the design of these programs creates a computational reality of what is desirable, and what is not. “Frequently, the algorithm’s decisions are not mysterious or beautiful, but quite banal, familiar, a tool and platform for reifying inequity. That AI can now make decisions of horrifying and binding effect (who gets a job, a house, insurance), potentially locking people deeper into structural inequity, it seems imperative we question the pure, mathematical objectivity claimed by engineers,”¹⁰ writes Khan.

The really deep problem we have to admit is that bias is a feature of these systems, not a bug. This is what Benjamin Bratton describes as a perceptual, historical and ideological apophenia — our human tendency to perceive and create connections and meaning between unrelated things — that invariably causes correlational biases in all designed systems.
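A small synthetic sketch (entirely made-up data, for illustration only) shows how such correlational bias becomes a feature of the learned system: a model trained on historically biased decisions reproduces the bias through a correlated proxy, even when the protected attribute itself is withheld from it.

```python
# Synthetic, illustrative data only: bias learned as a feature.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, n)            # protected attribute (withheld from the model)
proxy = group + rng.normal(0, 0.5, n)    # e.g. a postcode correlated with group
skill = rng.normal(0, 1, n)              # what the system claims to measure

# Historical decisions: skill mattered, but group 1 was penalized.
past = ((skill - 1.5 * group + rng.normal(0, 0.3, n)) > 0).astype(float)

# Fit a simple linear rule on (proxy, skill) only -- least squares as a
# stand-in for whatever model an engineer might actually deploy.
A = np.column_stack([proxy, skill, np.ones(n)])
w, *_ = np.linalg.lstsq(A, past, rcond=None)
pred = (A @ w) > 0.5

# The learned rule penalizes group 1 without ever seeing the group column.
for g in (0, 1):
    print(f"group {g}: predicted positive rate = {pred[group == g].mean():.2f}")
```

Withholding the group column changed nothing: the bias lives in the correlations themselves, which is why Bratton argues for uncoding and decoding rather than a veneer of neutrality.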

These realizations explain why ethical design principles need to arise from a profound critical dialogue and an updated design discourse, never from an illusion of neutrality, nor merely from the idea of reducing bias in AI. As Bratton states, “We need not only to be careful not to code our biases back to the AI systems, but rather try to uncode and decode them from these systems.”¹¹

5. The comet is us

Clarifying the hopeful trajectories for benevolent AIs — and defining the strategic bets and tactical steps to make those a reality — demands that we escape from the prosaic hype and the folk design philosophies that dominate the contemporary AI design discourse.

First, we need to admit the sinister characteristics of our culture and technology, and our own role in the cases that have led us to the dark side, even if inadvertently. There “is a decade long misrepresentation of the relationships and responsibilities of designers to their users, a dangerous lack of professionalism, ethics and self-regulation, along with a lack of understanding of how multi-disciplinary design is leveraged to both exploit and harm the community,”¹² writes technologist Cade Diehm in his urgent essay On Weaponised Design.

Second, we need new guiding principles and revised worldviews across the design and engineering community to guide our understanding of AI. Bratton asserts that “our conception of design needs to move away from anthropocentric and androcentric philosophies of technology.” This means we cannot just try to humanize AI and aim for increased morality and added oversight; we need to challenge the assumptions that lie behind our decisions. “Instead of mirroring human biases, for AI to be more generally beneficial for us and everyone else, we may instead wish it to be less human-like, not more,”¹³ Bratton continues.

Third, to succeed, design and engineering need to engage with an even broader spectrum of alternative points of view and constituencies, beyond the contemporary technology and design discourse. This should include art, design, and technology criticism; feminism; sociology; post-humanism; political philosophy; and contemporary cybernetics, among several other important fields of inquiry. If we do not expand our awareness of the world, we will fail to understand how AI is likely to shape it, now and in the future.

So it seems evident that the imminent computational comet we so eagerly expect is not that artificial, nor does it come from the outside. The comet with the deepest impact is already here — it is us, our own intelligence, and our collective capacity to design benevolent intelligent machines into our societies.

This essay was written for the what we write when we write about design program of new forms of design criticism. It is organized by the Finnish Cultural Institute in New York and funded by the Finnish Cultural Foundation.

FOOTNOTES

  1. Hornung, Helmut. Cultural history of comets. Max-Planck-Gesellschaft, 2013. https://www.mpg.de/research/comets-cultural-history
  2. Marcus, Gary. Deep Learning: A Critical Appraisal, 2017. https://arxiv.org/pdf/1801.00631.pdf
  3. Bratton, Benjamin. Remarks on the Hole of Representation in Computer ‘Vision’. Lecture at the European Graduate School, 2017. https://www.youtube.com/watch?v=VSAaGcPsim0&t=1361s
  4. Ito, Joi. Resisting Reduction: A Manifesto. Designing Our Complex Future With Machines. Journal of Design and Science, MIT, 2018. https://jods.mitpress.mit.edu/pub/resisting-reduction
  5. Pasquinelli, Matteo. Machines that Morph Logic: Neural Networks and the Distorted Automation of Intelligence as Statistical Inference. Glass-Bead, 2017. http://www.glass-bead.org/article/machines-that-morph-logic/
  6. Ito, Joi. Ibid.
  7. Haraway, Donna. A Cyborg Manifesto, 1985.
  8. Bridle, James. New Dark Age — Technology and the End of the Future. Verso, 2018.
  9. Khan, Nora. I Need It To Forgive Me. Glass Bead, 2018. http://www.glass-bead.org/article/i-need-it-to-forgive-to-me/
  10. Khan, Nora. No Safe Mode. Flash Art Online, 2017. https://www.flashartonline.com/article/sondra-perry/
  11. Bratton, Benjamin. Ibid.
  12. Diehm, Cade. On Weaponised Design. Tactical Tech, 2018. https://ourdataourselves.tacticaltech.org/posts/30-on-weaponised-design/
  13. Bratton, Benjamin. Ibid.
