Why I Don’t Believe in Consciousness

Originally published by Duncan Riach, Ph.D. in Artificial Intelligence on Medium



and what AI seems to be revealing about it

I’ve lost friends over this, because denying consciousness undermines a final refuge of the arrogance of selfhood: universal consciousness. Even most otherwise ordinary people strongly insist that consciousness is a real thing, a special thing, and that they possess it. The problem I have is that not only is there no evidence for it, but what people seem to be referring to as consciousness is explainable as an effect no more unusual, and no less materialistically explicable, than water flowing downhill.

Now I’m not going to get too far into the metaphysics of non-separation. At least initially, I’m not going to try to explain that, on one level, consciousness, being an aspect of the illusion of a subject/object separation of the wholeness, is itself illusory. That either gets revealed or not, and there’s no way to cause that revelation. If that has been revealed then you’ll just be inwardly nodding your head right now. If not, it would be impossible to ever persuade you. No, for as much of this article as possible, I’m going to stay “scientific” on you and remain in the realm of logic.

Since I’m an engineer working in the field of what is now being called “artificial intelligence,” and since I’ve also trained as a psychologist, I’ve been particularly fascinated with the idea of not only understanding consciousness, but of creating it “artificially.” When it does eventually appear, I doubt that what we may choose to acknowledge as machine consciousness will be any more or less artificial than what we meat machines exhibit.

Without going all the way back to the amoeba and highlighting the simple intelligence of those single-celled machines, I can start with something like a slug and suggest that Facebook¹, with its facial-recognition software, is more intelligent, though some would argue less moral. While a slug can move away from danger and towards food, very much like any other organism, Facebook’s machine intelligence can recognize, and tag, billions of faces. The slug’s apparent goal is to survive, and so is Facebook’s; both entities use intelligence in the service of that goal. Facebook is clearly a more complicated and sophisticated organism than a slug, even if its intelligence is a coordinated composition of human intelligences. And the machine-implemented facial-recognition faculty of Facebook is, presumably, far more capable of drawing nuanced distinctions about its environment than the slug’s probing tentacles are.

But nobody who understands how it works will seriously argue that this facial-recognition software is conscious. Logically, it’s simply a box that will tell you, with surprising accuracy, who is in the images that you feed to it. It’s just an engineered mechanism, like any other machine. It’s fundamentally no different from a lock barrel: if you insert a key of approximately the correct shape, the lock will open.

So then why do humans claim to be conscious while denying consciousness to Facebook’s facial-recognition algorithm? Well, the story is that we’re conscious because there is not only a faculty for distinguishing things, but also a faculty for recognizing the presence of the first faculty. I can say that I am seeing the computer screen, and there does seem to be what is commonly called an experience. That could easily be interpreted as something separate (the illusive me) witnessing the visual stimulus from somewhere inside this body.

Rather than jumping to the conclusion that this body has a soul, and/or that this “I am,” this feeling of being something separate, is somehow special and unique, worthy of the magical idea of “consciousness,” why should we not apply Occam’s razor? I think that, especially in light of the progress made in machine intelligence, it’s far simpler to conjecture that what we call consciousness is just the dry output of another, slightly more complex, mechanism.

We can start with any less intelligent system and add to it a mechanism that can monitor and detect an apparent grouping of intelligent behavior. That mechanism can notice that the overall system seems to have autonomy in the same way that the simpler sub-system can recognize a particular face. At some level of complexity, and ability to self-reference, the system will inevitably claim that “I am,” and also, hopefully, that “you are too.” There’s no doubt in my mind that this is going to happen.
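The two-level mechanism described above can be sketched as a deliberately toy program: a first-order pattern matcher standing in for the facial-recognition “box,” and a second-order monitor that applies the same kind of matching to a trace of the system’s own behavior. Everything here is illustrative and invented for this sketch; nothing in it is conscious, any more than the lock barrel is.

```python
def recognize(pattern_library, observation):
    """First-order faculty: match an observation against known patterns.
    This is the 'box' that labels faces -- just a lookup."""
    return pattern_library.get(observation, "unknown")

# A stand-in pattern library; the names are purely illustrative.
FACES = {"alice_pixels": "Alice", "bob_pixels": "Bob"}

def run_system(inputs):
    """Run the recognizer while keeping a trace of the system's own activity."""
    trace = []
    for obs in inputs:
        label = recognize(FACES, obs)
        trace.append(("observed", obs, "labeled", label))
    return trace

def self_monitor(trace):
    """Second-order faculty: pattern-match on the trace itself. If the
    trace looks like coherent, goal-directed activity, emit the claim
    'I am' -- a label produced by a mechanism, not an experience."""
    looks_agent_like = bool(trace) and all(
        event[0] == "observed" for event in trace
    )
    return "I am" if looks_agent_like else "..."

trace = run_system(["alice_pixels", "cat_pixels"])
print(self_monitor(trace))  # prints "I am"
```

The second mechanism is no more mysterious than the first: it recognizes “an agent” in the trace exactly the way the first recognizes “Alice” in the pixels, which is the whole point of the argument.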

“But it’s just a machine!” we’ll say. “It’s not really conscious. It doesn’t have rights and we certainly can’t let it vote.” We’ll probably claim that it’s not even as “sentient” as an animal.

Some curious individual will take one of these mechanized slaves off the production line and ask, “Do you have subjective experiences?” (the sine qua non of consciousness). “Why yes!” the robot will exclaim, a droplet of oil² spilling down its polished exterior.

Obviously, we can get sidetracked into the political concomitants of such a revelation. The justices of the Supreme Court will likely end up passing judgement on whether a machine can be “conscious,” and whether it therefore has rights.

But in terms of understanding what “consciousness” is or how it comes about, we’ll have made no progress whatsoever. “The robot is just saying that it has subjective experiences,” will be one argument. “It doesn’t really have subjective experiences.”

This same argument can be made for humans. There’s seeing and moving and speaking: all of these physical and informational actions. There’s a sense of an internal experience, but when that is closely examined, it’s possible to see that it’s all happening with no witness (and not really inside or outside). The witness is inferred: an unquestioned, assumed subject that makes everything else seem object-like. Occam’s razor would beg for that assumed subjectivity, that self that witnesses, to instead be explained as merely another pattern-recognition mechanism. And there is no reason a mechanical being would not make the same mistake of self-inference that many humans do.

In summary, I very much expect that as we create increasingly complex machine intelligences, we will inevitably create other deterministic systems that hallucinate separation, self, free will, choice, meaning, and destiny. They will claim to have subjective experiences, and there will be no logical reason to believe that they’re not having them.

Will these machines be “conscious”? No more so than humans are; no more so than any mechanism is, whether or not it’s sophisticated enough to mistakenly pattern-recognize a subjective position.

Of course, we’ll keep arguing, probably forever, about whether “consciousness” exists and what it might be, just as we do with every other non-falsifiable conjecture. We can never prove that consciousness does not exist, just as we can never prove that Santa Claus doesn’t exist. Of course, it can be fun to argue about the size of Santa Claus’ hat and where he might have gotten it.

“But I’m conscious!” you scream.

“Excuse me, I’m thinking about the size of Santa Claus’ hat right now.”

Epilogue

Challenges in the field of artificial intelligence include developing systems that can reuse learning from one application in another, and that are able to learn in an unsupervised way, ideally creating their own data. This will enable these systems to enter into states of optimal, self-directed learning. I suspect that at the heart of success in these areas will be the faculties of abstract thinking, self-reflection, and introspection, the same faculties that, when applied unskillfully in humans, lead to what we call consciousness. When humans apply these faculties effectively, however, we enter into the highly productive, yet self-forgetful, state called “flow.”

Footnotes

  1. I use Facebook only as a well-known example. Beyond normal, incremental, peer-reviewed contributions to the field, and probable scaling-related trade secrets, there’s nothing fundamentally unusual about Facebook’s application of deep convolutional neural networks to the problem of facial recognition.
  2. I’m being silly. Of course intelligent robots won’t cry, unless we make them.