A Philosophical Grasping at Consciousness


Consciousness is the most fundamental and elusive facet of human existence. In conversation, we refer to consciousness as the property of our minds that produces the richness of lived experience. Colloquially, we agree that consciousness is physically grounded in the brain, though many argue that simply having a brain is not a sufficient condition for consciousness. At best, we have a fuzzy sense of the causal and definitional properties of consciousness: even though it is an essential part of human existence, we have little understanding of precisely what consciousness is and how it is produced.

I’ll roughly break down the problem of consciousness into two distinct, though not mutually exclusive, questions:

  1. How is consciousness produced? This is a neurological question about how the electrical signals and biological components of our 1.5 kg brain produce the human experience.
  2. What are the properties of consciousness, or of being conscious? This is both a neurological and a philosophical problem. Philosophers have long believed that precise, qualitative probing through the right formulation of the problem of consciousness can yield important insight.

In this article, I largely ignore the former neurological problem of discovering the mechanism behind consciousness. Instead, I discuss the latter philosophical problem of how to define and describe consciousness, which often abuts the adjacent mind-body problem of philosophy (note that there is a growing modern field called neurophilosophy that grounds the philosophy of consciousness in neuroscience). Here, I follow three philosophers’ attempts to formulate and solve the problem of consciousness using machines, bats, and qualia.

Searle: Understanding Consciousness through Machines

In 1980, Berkeley philosophy professor John Searle published Minds, Brains, and Programs, in which he explicitly attempts to answer the question: “could a machine think?” In doing so, Searle proposes an important property for consciousness, arguing that a necessary condition for a thinking being is intentionality.

Searle’s argument is roughly structured as follows:

Premise 1: Mental processes in the brain cause intentionality.

Premise 2: Running a computer program does not, in and of itself, result in intentionality.

Conclusion: “Any attempt to create intentionality artificially… would have to duplicate the causal powers of the human brain.”

To Searle, there seems to be something internal to the human brain that enables consciousness; the exterior behavior, or software, of the brain is not enough to constitute a thinking being. In fact, Searle goes as far as to postulate that mental processes might rely on the physical properties, or hardware, of the brain. In doing so, Searle disputes several justifications in favor of strong AI (the idea that a computer can itself be a mind) and casts doubt on the dualist project of creating artificial intelligence without recreating the underlying chemical processes of the brain.

Most famously, in disputing Roger Schank’s work, Searle formulated the Chinese room thought experiment (Gedankenexperiment), which contributes a helpful narrative for conceptualizing consciousness (though there are interesting refutations of the experiment, which I encourage you to look into if you’re interested). In the thought experiment, a person who knows no Chinese is locked in a room. The person is given a rule book, written in English, for correlating Chinese symbols with other Chinese symbols purely by their shape. The thought experiment supposes that, by following these rules, the person can respond to written Chinese questions with written Chinese answers, and can do so well enough that a native speaker would believe the replies were written by another native speaker.

Searle claims that this type of symbol manipulation is exactly what a computer does, and that no matter who (or what) follows the rules, we can agree that the rule-follower does not truly understand Chinese. Someone who merely applies a symbol-shuffling algorithm does not understand a story in Chinese, since symbols are only a syntactic vessel for the underlying ideas.

“As long as the program is defined in terms of computational operations on purely formally defined elements, what the example suggests is that these by themselves have no interesting connection with understanding.” — Searle
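
To make the intuition concrete, here is a minimal sketch of the room as a program. This is my own illustration rather than anything in Searle’s paper, and the rule-book entries are invented for the example; the point is only that the mapping from input symbols to output symbols is purely syntactic:

```python
# A toy "Chinese room": the rule book pairs incoming Chinese symbols
# with outgoing Chinese symbols. The operator (this function) applies
# the rules by shape alone; no meaning is represented anywhere.
# NOTE: the rule-book entries below are invented for illustration.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然，我说得很流利。",  # "Do you speak Chinese?" -> "Of course, fluently."
}

def chinese_room(symbols: str) -> str:
    """Return a fluent-looking reply by pure syntactic lookup."""
    # Fallback reply: "Sorry, please say that another way."
    return RULE_BOOK.get(symbols, "对不起，请换一种说法。")

print(chinese_room("你好吗？"))  # prints a fluent reply with zero understanding
```

However large the rule book grows, the lookup never touches what the symbols mean; for Searle, that is exactly why passing as a fluent speaker does not amount to understanding.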

It is important to note that for Searle, understanding is binary: one understands, or one does not. He is not interested in the granularity of understanding, and it does not matter whether the person in the room has, or is able to simulate, the firings of neurons in the brain. Further, the behavioral appearance of understanding is not convincing either. What matters to Searle are the causal properties of the brain, which produce what he calls intentional states. These intentional states are defined “as a certain mental content with conditions of satisfaction, a direction of fit, and the like.”

It is also important to underscore that Searle does not believe it is impossible for machines to have understanding. However, formal models (which, I may add, are what today’s artificial intelligence systems consist of) are not sufficient for creating a machine that can think. This is because a computer merely simulates the syntax of information processing while neglecting its semantics. A computer runs a program, but it is the brain that produces mental states and events.

We can interpret Searle’s stance on the problem of artificial intelligence to formulate several required properties of consciousness. These requirements are grounded in intentionality. Intentionality in turn requires an understanding of the semantics of information, and Searle hypothesizes that this understanding may be a consequence of the physical properties of the human brain. Thus, consciousness must be intentional and grounded in the brain’s causal powers.

Nagel: Understanding Consciousness through Bats

“The fact that an organism has a conscious experience at all means, basically, that there is something it is like to be that organism.” — Nagel

In the 1974 piece What Is It Like to Be a Bat?, Princeton philosopher Thomas Nagel is fundamentally concerned with the subjectiveness of experience, an aspect of consciousness he feels is neglected in common reductionist accounts of mental states. As the title suggests, Nagel uses the experiences of bats to clearly delineate the distinction between the subjective and the objective. In doing so, Nagel carefully underscores that consciousness cannot faithfully be explored by extrapolating our individual conscious experiences onto other beings.

Nagel recognizes that phenomenological experience is inherently attached to a particular point of view. Fundamentally, one may not have access to the points of view of other conscious beings, which leaves the reducibility of experience in question. Bats exemplify one such inaccessible point of view: they have perceptual modalities, particularly sonar, that humans simply do not possess and therefore cannot understand the experience of having. When attempting to understand the experience of other conscious beings, we are limited by our imagination, and our imagination is limited by our lived experience. So, we may allow ourselves to imagine what it would be like to have sonar, but our imagining will always be through the lens of our human experience.

Nagel’s article perhaps raises more questions than it answers. In What Is It Like to Be a Bat?, Nagel does not offer a clear alternative to solely objective attempts at describing consciousness. He does not repudiate physicalism, nor does he view an objectivist account of the mind as wholly bad; he entertains the possibility of an objective phenomenology independent of imagination. Ultimately, Nagel contributes an important call to action: to give further philosophical thought to the relationship between the objective and the subjective in our conception of consciousness.

The relationship between the objective and the subjective, Nagel claims, forms important groundwork for theories of the mind, and it may lead us to ask different questions about consciousness given our innately limited capacities. Nagel poses one possible reformulation of the question of consciousness, suggesting that one should really ask “how do experiences appear to me” rather than what experiences are like for a conscious being, such as a bat. In doing so, Nagel explicitly acknowledges the necessity of the subjective in understanding consciousness.

Jackson: Understanding Consciousness through Qualia

In his 1982 paper Epiphenomenal Qualia, Australian National University professor Frank Jackson argues for the existence of qualia as a fundamental part of consciousness. Qualia is a philosophical term for the “what it’s like-ness” of something, such as what it is like to be a bat or to smell a rose. In advancing the existence of qualia as a self-described “qualia freak,” Jackson rejects a physicalist explanation of consciousness.

“I think that there are certain features of bodily sensations especially, but also of certain perceptual experiences, which no amount of purely physical information includes.” — Jackson

Jackson tries to prove his claim through what he coins the Knowledge Argument. Like Searle, Jackson employs a few simple thought experiments.

First, Jackson asks us to imagine what it would be like to be someone named Fred, who can quickly and reliably distinguish between two shades of the color red. For Fred, the distinction between the two shades, red 1 and red 2, is so obvious that the difference is as salient to him as the difference between purple and green is to us. Jackson argues that even if we knew all of the physical information behind Fred’s special ability, we would not understand what it is like to be able to distinguish red 1 and red 2.

Jackson uses the Fred thought experiment to explicitly distinguish his argument from Nagel’s, stating that Fred’s case is one of ignorance about a property of his experience, not of what it is like to have his experience. Jackson also uses Nagel to re-emphasize his refutation of physicalism: if physicalism were true, knowing all the physical facts “would obviate any need to extrapolate or to perform special feats of imagination or understanding in order to know about his special colour experience.”

More famously, Jackson poses another thought experiment about a scientist named Mary. Suppose that Mary has lived her whole life in a black-and-white room and can only conduct experiments through a black-and-white television. Through the television, Mary becomes an expert in vision; she theoretically understands all there is to know about the human visual system. Then, Jackson supposes, when Mary steps into the real world, which is filled with colors she has never seen before, she will gain new knowledge; her investigation of the physics of vision from the black-and-white room still lacked some information that could only be acquired through experience.

Both thought experiments seek to show that physical information does not contain all there is to know about the world. Having refuted physicalism, Jackson offers qualia as a way to conceptualize certain mental states that have no bearing on the physical world and that carry additional information distinct from the physical. Therefore, consciousness cannot be captured by a merely physical account of its workings.

Conclusion: What is Consciousness?

Our three philosophers claimed that consciousness involves:

  • Intentionality (Searle),
  • Subjectivity (Nagel), and
  • Qualia (Jackson).