The Connected Mind

Human augmentation as a pathway to a benevolent future

The implications of artificial intelligence (AI) are being fiercely debated by technologists, economists, politicians, and philosophers alike. AI now threatens workers whose jobs previously seemed impossible to automate, from financial analysts and lawyers to journalists. How will societies react? And how will people’s primary purposes in life change over time?

Even if we do not all agree on what the future of work holds, one thing is certain: its character and meaning will change considerably, offering new opportunities and perspectives. The essay below proposes how brain-interfacing technology could be harnessed to create a world of possibilities that, only a few decades ago, would have been impossible to fathom.

From innovation to augmentation

The advent of the Industrial Revolution in the late 1700s marked a turning point in human history: it set the stage for a fruitful and ever-growing relationship between humans and machines. The steam engine provided unprecedented geographical independence: factories could be built away from the rivers that had been their primary power source, and transportation (and thus exploration) became not only quicker but also safer and less cumbersome. Over the following decades, many more inventions arrived, and with them an increasing sense of curiosity and a yearning for innovation, which in the final third of the 19th century sparked the technological revolution. Electricity, automobiles, and telegraphy added further degrees of freedom to people’s vocations and lifestyles. The digital revolution of the 1980s pushed this trend further still, endowing us with the personal computer and the internet. As these technologies advanced, they were increasingly employed as extensions of human faculties, in both professional and leisure settings. Digital products became freely accessible, information could be retrieved in an instant, and social communication became easier than ever. We had, perhaps somewhat unwittingly, entered the era of human augmentation.

Are we dispensable?

Owing to the incessant, exponential growth of technology, we are now on the cusp of a fourth industrial revolution: the integration of AI and robots into our workflows. Primitive robotic systems have already been deployed across a variety of industries, chiefly to perform physically demanding, dangerous, or repetitive tasks [1]. Next will come their employment in professions requiring higher-level cognitive skills, something many believed was unique to us humans. This prospect understandably generates apprehension about job security and fuels the notion that we will soon reach the end of work, a scenario outlined by the historian Yuval Noah Harari in his book Homo Deus [2]. While the odds of such a scenario materializing are certainly not zero, the careful and prudent development of augmentative technologies can steer us away from this rather dystopian path and equip us with the qualities needed to prevail in a data-driven economy.

Augmentation and performance enhancement

Several approaches to enhancing human faculties have been explored. Ordered by increasing likelihood of deployment in the global economy, these are: (i) neurotechnology, (ii) nootropics, (iii) genetic modification, and (iv) brain-computer interfaces.

Neurotechnology aims to elucidate the mechanisms by which memories are stored and retrieved [3]. If successful, an implant could augment those mechanisms to enhance memory. However, given our still rudimentary understanding of memory encoding, together with the high degree of invasiveness required, this approach is deemed the most speculative of all.

Nootropics, or smart drugs, are chemical compounds capable of improving cognitive aptitudes [4]. Despite their reportedly frequent use by students and Silicon Valley executives, they have not been subjected to long-term safety studies and remain among the most vehemently debated topics in neuroscience and medicine.

Gene editing has proven promising for eradicating congenital defects in human embryos; however, it remains unclear whether DNA can be safely altered in existing humans [5]. Moreover, owing to the large size of our genome, the consequences of genetic modification are extremely difficult to predict, and the ethical challenges it poses are profound. It thus appears sensible to assume that this technique will not be used as a tool for human augmentation in the near future.

Brain-computer interfacing (BCI) exploits current knowledge of neuroscience and engineering to enable the voluntary, thought-based control of external devices [6]. It appears to be the most prudent and feasible path to human augmentation because it does not tamper with brain function itself (and therefore, it can be argued, preserves our human characteristics) and because it is progressing rapidly. Below, I propose how non-invasive BCI technology could be integrated into our economy and give a brief account of the doors it could open for us. I will also discuss, without diving too deep into a world of science fiction, the applications as well as the societal considerations of BCI as an augmentative technology.

Non-invasive BCI: potential, limitations, and incorporation of AI

Recent advances in brain imaging and recording techniques have made it possible to extract the brain signals that carry the commands for performing a task. Among the various modalities of brain signal acquisition, one distinguishes between invasive, semi-invasive, and non-invasive techniques [7]. Invasive and semi-invasive BCIs entail implanting recording electrodes into brain tissue or onto the brain surface, respectively. This requires opening the skull and can therefore only be performed as part of surgical disease treatment. Non-invasive methods circumvent this issue, enabling speedier progress in augmentation research. Particularly prominent techniques are electroencephalography (EEG), which records the brain’s electrical activity directly from the scalp, and functional magnetic resonance imaging (fMRI), which measures brain activity indirectly by detecting the oxygenated blood delivered to metabolically active neurons.

While still in its infancy and largely confined to the laboratory, non-invasive BCI has proven increasingly capable of interfacing with the brain. Captured brain signals can be processed and translated into commands, which are then fed into an external mechanical or electrical output device to perform a desired action [8] (Fig. 1). The ultimate aim is to control such output devices using our thoughts (brain signals) alone. Likewise, information from such devices could be fed back to the user, allowing the brain to adjust and refine its control signals.

Figure 1: Pipeline of brain-computer interfacing technology.
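To make this pipeline concrete, below is a minimal sketch of its software stages in Python. It is an illustrative sketch only: the sampling rate, filter band, log-variance features, and linear classifier are common choices in motor-imagery BCI research, but they are my assumptions here, not details taken from the cited studies.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # assumed EEG sampling rate in Hz

def bandpass(eeg, low=8.0, high=30.0, fs=FS):
    """Isolate the 8-30 Hz band, where motor-imagery rhythms are prominent."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def features(epoch):
    """Log-variance per channel: a simple, standard motor-imagery feature."""
    return np.log(np.var(epoch, axis=-1))

def train_decoder(epochs, labels):
    """Fit a linear classifier on labeled training epochs, each of shape
    (n_channels, n_samples); labels might be 0 = imagined left-hand
    movement, 1 = imagined right-hand movement."""
    X = np.array([features(bandpass(e)) for e in epochs])
    return LinearDiscriminantAnalysis().fit(X, labels)

def decode_command(clf, window):
    """Translate one new window of brain signals into a device command."""
    x = features(bandpass(window)).reshape(1, -1)
    return "LEFT" if clf.predict(x)[0] == 0 else "RIGHT"
```

In a closed-loop system, the decoded command drives the output device, and the device’s resulting state is presented back to the user (the feedback arm of Fig. 1), who adapts their brain signals accordingly.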

This idea has been implemented in various studies, chiefly to restore communication in people suffering from locked-in syndrome, stroke, or neurodegenerative disease. Indeed, most applications to date have been clinical; however, the wider potential of BCI for human augmentation is evident, and occasional non-clinical studies have started to emerge. In 2008, researchers at the University of Washington demonstrated the control of a humanoid robot using a visually evoked EEG response [9]. The robot transmitted camera images of the objects it encountered back to the user; the user selected a particular object by merely attending to its image, and the robot picked it up. Similarly, a 2016 study carried out at the University of Minnesota demonstrated EEG-based control of a robotic arm performing reach-and-grasp tasks [10].
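The selection-by-attention step in the Washington study can be illustrated with a toy version of evoked-response averaging: flashes of the attended image elicit a larger post-stimulus response (such as the P300) than flashes of ignored images. The single-channel setup, window boundaries, and simple amplitude score below are simplifying assumptions for illustration; the actual study used a full classifier.

```python
import numpy as np

FS = 250  # assumed EEG sampling rate in Hz

def evoked_amplitude(epochs, fs=FS, window=(0.25, 0.45)):
    """Average single-channel epochs time-locked to an image's flashes,
    then measure the mean amplitude in a post-stimulus window where an
    attention-related evoked response is expected."""
    erp = np.asarray(epochs).mean(axis=0)  # average over repeated flashes
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    return erp[i0:i1].mean()

def select_attended_object(epochs_per_object):
    """Pick the candidate image whose flashes evoke the largest response;
    epochs_per_object maps object name -> array of (n_flashes, n_samples)."""
    scores = {obj: evoked_amplitude(eps)
              for obj, eps in epochs_per_object.items()}
    return max(scores, key=scores.get)
```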

Such studies highlight the augmentative potential of BCI; however, several limitations still hinder its commercial launch. A major limitation of EEG and fMRI is that their recordings are heavily contaminated by noise. Machine learning, a subfield of AI, is used to filter out this noise and recover cleaner brain signals, but the process remains inefficient, so advances in machine learning are a prerequisite for the progression of BCI technology. Another limitation is the relatively low information transfer rate of BCIs, currently around 1.0 bits/sec [11], roughly a million times slower than an average internet connection. These limitations will need to be overcome in order to push BCI technology out of its infancy and incorporate it into our economy.
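To make the transfer-rate figure concrete, the standard way to estimate it, and the estimate whose pitfalls [11] examines, is Wolpaw’s formula, which converts the number of possible targets, the classification accuracy, and the selection time into bits per second. The example numbers below are illustrative, not taken from the paper.

```python
from math import log2

def wolpaw_itr(n_targets, accuracy, seconds_per_selection):
    """Information transfer rate (bits/sec) of an N-choice BCI under
    Wolpaw's formula."""
    n, p = n_targets, accuracy
    if p >= 1.0:  # perfect accuracy conveys the full log2(N) bits
        bits = log2(n)
    else:
        bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits / seconds_per_selection

# Illustrative example: a 4-target interface at 80% accuracy, one selection
# every 2 seconds, yields about 0.5 bits/sec -- the order of magnitude the
# article quotes.
print(wolpaw_itr(n_targets=4, accuracy=0.80, seconds_per_selection=2.0))
```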

BCI applications: beyond the climax of human potential

In the medical field, BCI technology has already started to transition from the laboratory into the real world, with extremely positive outcomes. Next will come the transformation of a multitude of other industries. Below is a curated list of sectors, each with brief practical examples, that would benefit from BCI-based applications:

  • Law: The accuracy of jury verdicts could be increased by using fMRI to decipher the brain activity patterns of the defendant. The mere presentation of a piece of evidence (e.g. a murder weapon) could reveal whether the accused recognizes it.
  • Transport: All domains of transport would benefit from a BCI that monitors the brain-derived fatigue levels of the person in control of a vehicle, thereby significantly reducing accident-related fatalities (a minimal sketch of such a monitor follows this list). Similar BCIs could also be applied in other professions that require high mental alertness to ensure safety.
  • Education: Using BCIs to monitor students’ degree of attention, engagement, and cognitive load could lead to a significant increase in learning rate. These measures could help teachers to tailor their academic content to the level of the particular group they are teaching, thus maximizing the learning outcome.
  • Astronautics: Astronauts would greatly benefit from physical amplification through brain-controlled exoskeletons. For instance, dangerous tasks such as spacewalks could be performed by robots controlled by astronauts from within the spacecraft. Such exoskeletons could similarly benefit workers on Earth whose occupations are physically strenuous or pose safety risks.
  • Entertainment: Gaming and entertainment could be transformed by using interfaces that augment traditional ones like joysticks. This would not only speed up the game, but also allow for more actions to be performed simultaneously, leading to a more vivid user experience.
  • Arts: An additional modality could be added to the field of arts by using BCI as a vehicle of creation.
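As an illustration of the transport example above, here is a minimal sketch of a spectral fatigue monitor. The (theta + alpha) / beta power ratio is one commonly studied drowsiness indicator; the band edges, sampling rate, and alert threshold are assumptions that a real system would have to calibrate per user.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed EEG sampling rate in Hz

def band_power(freqs, psd, low, high):
    """Total spectral power between two frequencies."""
    mask = (freqs >= low) & (freqs < high)
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])

def drowsiness_index(eeg_window, fs=FS):
    """(theta + alpha) / beta power ratio, which rises as alertness drops."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=fs * 2)
    theta = band_power(freqs, psd, 4, 8)
    alpha = band_power(freqs, psd, 8, 13)
    beta = band_power(freqs, psd, 13, 30)
    return (theta + alpha) / beta

def check_operator(eeg_window, threshold=2.5):  # threshold is a placeholder
    """Raise a warning when the drowsiness index crosses the threshold."""
    if drowsiness_index(eeg_window) > threshold:
        print("Fatigue warning: prompt the operator to rest.")
```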

Safety considerations

The potential use of BCI-controlled technologies, as with any other form of technology, raises a series of safety concerns. Could these technologies be hacked? Could someone take control of my augmentative tools and use them to their advantage, or to my disadvantage? We will need to be proactive in BCI safety research to ensure reliability and security before this technology transitions into our everyday lives. Furthermore, lawmakers will need to pass sufficiently nuanced legislation on the use of BCIs and determine liability in cases of malfunction.

The transformation

Human experience is attuned to linear progression; technological progress, however, violates this linearity. It has advanced exponentially since the digital revolution of the 1980s, and this trend is likely to continue. This means that, while it may seem far away, BCI technology will be ready for commercial use sooner than we think. That, of course, prompts the question: are we ready for it? At this point, the answer still appears elusive. Humankind seems to demonstrate a curious, almost innate antagonism toward novel technologies; yet, after some time passes, the majority grows rather eager to embrace them. It therefore seems plausible that, perhaps after some initial objections, we will happily integrate BCI into our day-to-day lives, not only as a means of staying economically relevant, but also to keep pushing ourselves beyond our limits, a drive we have been expressing since the Industrial Revolution.

Consider for a minute what we could accomplish if we were freed from the limitations of biology. What if our thoughts were not limited to the two or three items we can hold in working memory at once? What if we could express ourselves without the physical barrier of articulation? We have demonstrated our will to move forward as a species through centuries of progress across a wide range of disciplines, and we now stand at the dawn of a level of potential that, only 50 years ago, would have been unfathomable. Through this unceasing drive for improvement, we brought this fantastic technology to life. Perhaps the time has come for technology to return the favor.

References

[1] D. Ichbiah, Robots: From Science Fiction to Technological Revolution, vol. 1, New York: Harry N. Abrams, 2005.

[2] Y. N. Harari, Homo Deus: A Brief History of Tomorrow, vol. 1, London: Vintage Publishing, 2016.

[3] J. Giordano, Neurotechnology: Premises, Potential, and Problems, Boca Raton: Taylor & Francis Group LLC, 2012.

[4] J. Siva, Nootropics and Smart Drugs, New York: BookBaby, 2012.

[5] T. Cathomen, M. Hirsch and M. Porteus, Genome Editing: The Next Step in Gene Therapy, London: Springer, 2016.

[6] J. Wolpaw and E. W. Wolpaw, Brain-Computer Interfaces: Principles and Practice, Oxford: Oxford University Press, 2012.

[7] R. N. Rao, Brain-Computer Interfacing: An Introduction, Cambridge: Cambridge University Press, 2013.

[8] J. J. Shih, D. J. Krusienski and J. R. Wolpaw, “Brain-Computer Interfaces in Medicine,” Mayo Clinic Proceedings, vol. 87, no. 3, pp. 268–279, 2012.

[9] C. J. Bell, P. Shenoy, R. Chalodhorn and R. N. Rao, “Control of a humanoid robot by a noninvasive brain–computer interface in humans,” Journal of Neural Engineering, vol. 5, no. 2, p. 214, 2008.

[10] J. Meng, S. Zhang, A. Bekyo, J. Olsoe, B. Baxter and B. He, “Noninvasive Electroencephalogram Based Control of a Robotic Arm for Reach and Grasp Tasks,” Scientific Reports, vol. 6, pp. 1–15, 2016.

[11] P. Yuan, X. Gao, B. Allison and Y. Wang, “A study of the existing problems of estimating the information transfer rate in online brain–computer interfaces,” Journal of Neural Engineering, vol. 10, no. 2, p. 026014, 2013.