What is it like to be intelligent?



One major benefit of the IIT model is that it seems to explain the unconscious nature of the cerebellum, which contains more neurons than any other part of the brain, yet shows no activity correlated with conscious experience. That’s the baseball-catching part of your brain. Given that you have around 600 muscles in your body, each able to fire up to 100 times per second, it makes sense that a lot of bandwidth is required to learn and reproduce motor patterns. Think of a pianist in the climax of an energetic concerto. The pianist thinks in abstract representations like notes and timing, not finger movements. So the cerebellum is powerful, yet lacks the high degree of interconnectedness seen in the conscious neocortex. It’s lo-phi, pun intended.

Experimentation will have to show us how much of what IIT measures should be attributed to intelligence, or function, rather than to consciousness alone. Can you truly perform complicated tasks that require intelligence without any integration? Perhaps this is a clue that consciousness comes along for free once you reach higher levels of intelligence.

What does all this say about A.I.?

If information is the whole and only game in town, then neural networks are the obvious place to look. After all, they are rudimentary representations of how the neurons of our own brains work, as far as we know, that is. The great irony of A.I. is that we don’t know how those artificial neurons work, either. Really. It works, spectacularly well in fact, but we’re not sure exactly how or why. There seems to be some magical capability in a network of simple arithmetic operations with non-linearity, repeated thousands of times. Artificial brains don’t learn as broadly or as quickly as a human brain, but they still learn a lot. Just by being shown lots of samples labeled with the desired outputs, even a small neural network can learn very intricate relationships between input and output. Sometimes beyond human level, even.
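To make “simple arithmetic plus a non-linearity” concrete, here is a minimal sketch, plain NumPy on a toy XOR problem of my own choosing rather than any production architecture, of a tiny network learning an input-to-output relationship purely from labeled samples:

```python
import numpy as np

# A toy feed-forward network: weighted sums plus a non-linearity,
# repeated layer after layer, nudged toward labeled examples.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # labels (XOR)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1 + b1)    # hidden layer: arithmetic + non-linearity
    out = sigmoid(h @ W2 + b2)  # output layer
    err = out - y               # how wrong are we on the labeled samples?
    # Backpropagation: nudge every weight slightly downhill on the error.
    g_out = err * out * (1 - out)
    g_h = (g_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ g_out;  b2 -= 0.5 * g_out.sum(axis=0)
    W1 -= 0.5 * X.T @ g_h;    b1 -= 0.5 * g_h.sum(axis=0)

print(out.round(2))  # should approach [0, 1, 1, 0]: a mapping, learned
```

Nothing in there knows what XOR is; it is arithmetic repeated until the numbers line up with the labels.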

Currently, the major limitation of progress is generalization: the ability to learn something once and apply it to new situations, without having been shown every example in advance. Babies don’t need to see all types of dogs to recognize a dog, but computers do. Humans can learn to play any type of game, from tennis to chess, but computers struggle to learn across domains. They can be superhuman in one category, but the best we can do right now is learning across different Atari games, or going from Go to chess. That’s it. No tennis. No poetry. What’s missing?

Surprisingly few people are actively working on this problem. Most of the commercially viable uses of A.I. don’t require anything remotely like human intelligence. In fact, if you just need to control valves in a chemical plant, it’s better that the algorithm doesn’t do a bit of sudoku or jazz composition on the side. Focused algorithms are incredibly useful commercially.

One of the few people actively thinking about it is Yann LeCun, one of the early pioneers of modern machine learning techniques in image recognition. Start with Reinforcement Learning. For context, that’s the type of algorithm behind AlphaZero. It’s learning by carrot and stick: do well and you get a carrot, do badly and you get a stick. That alone, left to do lots of learning, can produce spectacular intelligence like world-beating chess algorithms. So that gets us pretty far. What then?
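To make the carrot and stick concrete, here is a minimal sketch of tabular Q-learning, the simplest member of the Reinforcement Learning family, on a toy corridor world invented for illustration. AlphaZero layers deep networks and self-play tree search on top of the same reward principle:

```python
import random

# A 1-D corridor: states 0..5, a carrot (+1) at the right end,
# a small stick (-0.01) for every step taken along the way.
N, GOAL = 6, 5
Q = [[0.0, 0.0] for _ in range(N)]  # value of (state, action): 0=left, 1=right
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # Mostly act greedily, occasionally explore at random.
        a = random.randrange(2) if random.random() < eps \
            else max((0, 1), key=lambda i: Q[s][i])
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == GOAL else -0.01  # carrot at the goal, stick per step
        # Update the estimate toward reward + discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned policy: every state now prefers action 1 (head right).
print([max((0, 1), key=lambda i: Q[s][i]) for s in range(N - 1)])
```

Nobody tells the agent what a corridor or a carrot is; the reward signal alone shapes its behavior.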

Well, LeCun suggests we need a world model: a framework to represent the real world out there. For example, when humans play chess, we see the pieces and the board. They are objects with context and meaning. The algorithm sees none of this. No shape to the knight. No color to the board. It just sees numbers. Knight to d4. Checkmate.

The computer can learn the rules, but nothing about the objects. There is no meaning to moving a piece on the board that could be reused in moving an apple. You might even argue that even though the computer can play chess, it doesn’t understand chess. At all. To be precise, the computer learns a function approximation that happens to correlate strongly with good moves in chess. It’s not chess. If there were no human-supplied carrot and stick, it would in fact learn nothing whatsoever. The computer brain must be spoon-fed. This must change. The computer must be given a framework for modeling the world, just like humans have. We understand deeply the environment we live in: how to move, how to find food, how to climb stairs, how to open doors, how to ask for directions, how to hunt for jobs, how to write emails, and so on.
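To see just how little the algorithm is given, consider a minimal board encoding, my own illustrative invention rather than any engine’s actual format (real engines use richer bitboard or plane encodings, but the point stands):

```python
# What the algorithm "sees": no knight shape, no board color, just numbers.
EMPTY, KNIGHT = 0, 3                  # arbitrary numeric piece codes

board = [[EMPTY] * 8 for _ in range(8)]
board[6][2] = KNIGHT                  # "knight on c2" to us; a 3 in a grid to it

def square(name):
    # Map a name like "d4" to (row, col) indices in the grid.
    col = ord(name[0]) - ord("a")
    row = 8 - int(name[1])
    return row, col

frm, to = square("c2"), square("d4")
board[to[0]][to[1]] = board[frm[0]][frm[1]]
board[frm[0]][frm[1]] = EMPTY         # "Knight to d4" is just two array writes
```

Swap the 3 for a 7 and nothing about the computation changes; the meaning lives entirely in our heads.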

What if we got there? What if a computer could do all that? An exact functional replica of a human, but made from silicon and copper wire?

But are the lights still going to be on?

This comes back to our question of substrate independence. If you cloned yourself, which is in principle already possible, would the clone be conscious? It seems that would automatically be true, even if it isn’t your consciousness anymore. It’s a copy. How far can you take that, though? If you scanned your brain at the molecular or even quantum level into a computer simulation, would that be you, another copy of you, or just a virtual zombie lacking real human consciousness? Is a function approximation of conscious behavior enough to pass the test?

How can we know? Alan Turing, who laid the foundations of modern computing during WWII, thought of this already. He saw that the evolution from room-sized calculators to human-level intelligence was possible. He came up with the Turing Test: a human interacts with an entity in another room, solely through writing, and tries to discern whether that entity is human or not. If the judge can’t tell, the entity is credited with human-level intelligence. So, chatbots then?

Already, claims have been made about passing the Turing Test with relatively simple algorithms. The usual tactic is to steer the human toward topics where the algorithm can serve up memorized answers in a human manner. But it never lasts. Given more than a few minutes, the illusion breaks down. We’ve all been there with Siri. Gradually, that threshold will be pushed out further, with longer and more meaningful conversations carried out by chatbots, even if, again, they understand nothing of language itself. Yet much like with AlphaZero, conversation is just one aspect of human intelligence. A hermit could have a rich inner life, producing incredible intellectual or artistic feats without uttering a word. So how do we test for “real” intelligence, or even “real” consciousness?

Amusingly, science fiction may have given us the answer decades ago. Blade Runner introduced the Voight-Kampff test, used to pick out human-like replicants from the human population, especially when those replicants went rogue and violent. The test is designed to provoke emotional responses indicative of empathy, that special quality so central to the human experience. Difficult ethical and moral questions are asked while the subject’s pupil response is measured as an indicator of emotion. Kind of like a lie-detector test for the soul.

There is a fundamental problem with any such test, though, as John Searle showed with his Chinese Room thought experiment. Imagine a person in a sealed room who, by mechanically following rules for manipulating Chinese symbols, produces fluent written replies without understanding a word of Chinese. The lesson: an A.I. system can behave exactly as an intelligent, conscious entity should, yet be utterly and completely clueless about the contents of the experience it is having or is part of.

Back in the world of today, we can also look to IIT for a measure of consciousness. The higher the integration of information, or Phi, the higher the consciousness, according to the theory. By design, simple feed-forward neural networks have a Phi of exactly zero and cannot be conscious. There is learning but no integration. More complicated architectures, such as recurrent neural networks, can be shown to have positive Phi. That seems to imply that AlphaZero is conscious. It does integrate a lot of information, in fact. As much as a bacterium or a fly? That we cannot yet say, due to open questions in applying IIT and calculating Phi, but it seems very unlikely we need immediate concerns about DeepMind’s ethical treatment of AlphaZero as a conscious entity.
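Computing Phi exactly is intractable for anything but tiny systems, but the structural intuition behind the feed-forward claim is checkable. A toy sketch, my own illustration and emphatically not a Phi calculator: a strictly feed-forward network has no feedback loops, so it can be cut between layers without destroying any cause-effect structure, which is why IIT assigns it zero Phi. The check below just asks whether a connection graph contains a cycle:

```python
# Detect feedback in a directed connection graph via depth-first search.
def has_feedback(edges, n):
    graph = {i: [] for i in range(n)}
    for src, dst in edges:
        graph[src].append(dst)
    state = [0] * n                      # 0=unseen, 1=on current path, 2=done
    def visit(u):
        state[u] = 1
        for v in graph[u]:
            if state[v] == 1 or (state[v] == 0 and visit(v)):
                return True              # found a loop back onto our own path
        state[u] = 2
        return False
    return any(state[i] == 0 and visit(i) for i in range(n))

feed_forward = [(0, 2), (1, 2), (2, 3)]  # strictly layer-to-layer
recurrent = [(0, 1), (1, 2), (2, 0)]     # the output feeds back in
print(has_feedback(feed_forward, 4))     # False -> Phi = 0 under IIT
print(has_feedback(recurrent, 3))        # True  -> Phi can be positive
```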

A.I. researcher Joscha Bach has gone as far as to state that only simulations can be conscious, implying that purely physical systems cannot. Note that since the brain simulates both the world and the self, our consciousness would fall under this definition of simulation, while a rock would not. This is an argument against panpsychism, whereby consciousness is everywhere in the universe, present to different degrees in all matter. AlphaZero does model the world, but it does not really model itself, so it might fall short here, for now.

Couldn’t this all just be emergent phenomena?

Let’s go back to the only conscious beings we can be certain about: ourselves. We know about evolution. Couldn’t all of this be explained as phenomena emerging from how our brains evolved? Ants may not have the neurons to regret the-one-that-got-away, but we do. Maybe suffering and regret are what got us here. Our brains developed relatively rapidly once fire and weapons allowed us to congregate in larger social groups. If you learn to suffer, you can learn to avoid the broad set of behaviors that lead toward the ultimate suffering, death itself. Intelligence lets you regret in hindsight and plan ahead to avoid potential sources of regret. It’s all very nuanced, and the intricacy of human interaction is endless.

But then how in the world did we develop all this nuance? How can the brain produce such incredibly deep complexity out of monkeys beating each other with sticks? If intelligence and consciousness are so closely linked in human evolution, how does it all work inside our brains? Well, sadly, we don’t know that either. Given how much we’ve done with our brains, from General Relativity to the Human Genome Project to going to the moon, we know remarkably little about the source of all this intellectual progress.

We certainly know the parts of the brain, what some of them are functionally related to, and how they evolved from lesser species, and we can measure brain activity as electrical signals between those parts. That’s about it. We have no idea how learning works, for example. There is no known human learning algorithm, even though we can now teach neural networks to beat us at chess. Part of it, of course, is our limited moral capacity for experimentation; Hannibal Lecter wouldn’t get many research grants. The other part is the sheer complexity of the system. There are about as many neurons in one human brain as stars in our galaxy, around 100 billion, and each neuron connects to some 10,000 others. It boggles the mind, pun intended.
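The multiplication alone is humbling. A back-of-the-envelope check with the ballpark figures above:

```python
neurons = 100e9               # ~10^11 neurons, the commonly quoted ballpark
synapses_per_neuron = 10_000  # ~10^4 connections each
print(f"{neurons * synapses_per_neuron:.0e} connections")  # 1e+15, a quadrillion
```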

One fascinating novel approach to unraveling the relationship between the inputs and outputs of the brain is the Thousand Brains Theory. According to this framework, we don’t have one brain. Instead, we have thousands of connected little brains: the so-called cortical columns, somewhat self-contained stacks of neurons that include a special type of neuron, the grid cell, which interprets spatial information. No one column ever does anything alone.

The idea is that hundreds or even thousands of these units receive the same signal, as in the visual cortex. Then each produces an output of some sort, and they vote. Yes, vote. This is why you can know there is a floor underneath the rug without having to see it. Your brain combines a variety of sensory and memory inputs to model three-dimensional objects, like mugs. That’s how you know exactly where to place your fingers to reach and hold the far side of the handle.
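The statistical magic of voting is easy to demonstrate. A minimal sketch with invented numbers, illustrating the ensemble effect rather than Hawkins’ actual model:

```python
import random

# Many noisy "columns", each individually unreliable, vote on what
# they're sensing. The majority verdict is far more reliable than
# any single column.
random.seed(0)
TRUTH = "mug"

def column_guess():
    # Each column is right only 60% of the time on its own partial input.
    return TRUTH if random.random() < 0.6 else random.choice(["bowl", "rock"])

columns = [column_guess() for _ in range(1000)]
verdict = max(set(columns), key=columns.count)  # majority vote
accuracy = columns.count(TRUTH) / len(columns)
print(verdict, f"(individual columns only ~{accuracy:.0%} sure)")
```

With a thousand 60%-reliable voters, the majority is right essentially every time; that is the appeal of the architecture.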

Comparison of traditional (A) and 1,000 Brains (B) models of perception. Source: Hawkins, Jeff et al., A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex, Front. Neural Circuits, 11 January 2019, https://doi.org/10.3389/fncir.2018.00121

This model (B) is very different from the traditional hierarchical model (A) that also inspired current Deep Learning techniques. Deep Learning assumes a hierarchy of connected layers of neurons, each responsible for a higher level of abstraction, ultimately resulting in a decision or recognition.

Maybe this is why babies are so excited about peek-a-boo: their thousand brains haven’t yet acquired the memories to say what happens behind the curtain of the hands, so they genuinely believe you disappeared. This works on dogs, too. And maybe that’s why babies are clumsy. Their thousand brains are still rehearsing this choir performance of voting in real-time to produce the right actions.

This whole thing sounds like hogwash

Well, many leading scientists would agree. Consciousness has always been a bit of a fringe science, caught between philosophy, psychology, physics, and neuroscience without ever becoming a serious topic of study. Until now, perhaps, triggered by the rapid evolution of artificial intelligence and growing ethical questions about the treatment of animals. We need answers to these questions, and the pressure is mounting.

Yet many, such as Professor Sean Carroll, would simply say there’s nothing special going on. Wallace has said there is no physics of digestion either; it’s just biology, end of story. Meanwhile, even rather outlandish concepts like panpsychism are gaining ground, presenting consciousness more as a force of nature, like gravity, than a mere biological function. In that scenario, even rocks are conscious, just… less so.

The future of consciousness

So what’s the endgame here? It’s a whole lot of reading and terminology you’ve suffered through to get here. Let’s close out with something juicy.

Apparently, a young Elon Musk came to the conclusion that the meaning of life is to spread consciousness across the universe. This is why it is our duty to explore the moon, Mars, and beyond: to seed the universe with the gift of life and consciousness. The human panspermia. All the more so if there are no aliens out there, as the Fermi Paradox seems to suggest. It’s up to us to be the champions of life in the universe.

So what plans does Elon have for our consciousnesses? Well, he wants to tinker with them a little. Like, introduce a tertiary layer. Meaning the internet. Possibly with A.I. too. Oh, and there’s a catch. It involves needles. But don’t worry, a robot will insert thousands of micron-scale electrodes into your neocortex. Super carefully. Only a few millimeters deep. Oh, also the batteries. We’re going to need to take out a bit of skull. No big deal, you won’t miss it, honestly. A coin-sized bit. Basically the brain jack from The Matrix, but under general anesthesia. Not something you want to be conscious during.

Hold absolutely still. The Neuralink electrode insertion robot. Source: An integrated brain-machine interface platform with thousands of channels, Elon Musk, Neuralink, bioRxiv 703801; doi: https://doi.org/10.1101/703801

Initially, as with most of Elon’s grand plans, the R&D will be funded through commercial use-cases, a great lesson for any entrepreneur. In this instance, that means treating brain conditions like Alzheimer’s, epilepsy, autism, or blindness. Yes, curing blindness is on the menu in 2020. These are all major societal problems and therefore billion-dollar opportunities. It will directly help millions of people with debilitating health conditions while funding the real vision: a tertiary layer of the brain for healthy people.

The USB-C plug for your brain. Production models are planned to be wireless. Source: An integrated brain-machine interface platform with thousands of channels, Elon Musk, Neuralink, bioRxiv 703801; doi: https://doi.org/10.1101/703801

So what’s Elon’s stance on the C-word? He falls firmly in the emergence camp. Everything from 0s and 1s. No special sauce. If you can fix the TV by kicking it, you can fix the brain by zapping it through those electrodes. That’s it. Get your eyesight back, stop those seizures. But those electrodes can also download: your thoughts, your memories, your emotions, backed up to the cloud. First via USB, then wirelessly.

From there, he proposes, maybe you can even restore. Just like your phone or laptop when it goes whack: restore from the latest backup, good to go. Want to erase a few awkward memories from last night? Swipe right. And from there it’s one small step for mankind to literal immortality: all you really need is a new biological substrate. Say, a clone.

The trillion-dollar question is this: is it you, though? Maybe it strokes your ego that a clone picks up where you left off, but “you” would still be dead forever. The difference is the one between sleep and death. You feel like the same person after sleep, but death is a one-way ticket. If we can’t really tell the difference, does it matter to you personally? Obviously, the clone that wakes up would insist it’s you, with all your memories intact. But is that the same you waking up in a new, younger, hotter body, or just a clone with your memories?

Sooner or later, we may all have to make our own judgment.