AI consciousness is already here, sorta


My favorite AI story is the follow-up to chess master Garry Kasparov's defeat by Deep Blue. Most people stop at his dismay, but what he came up with later, his fusion with AI to beat other AIs at what he called centaur chess, is the sort of thinking we must exercise when it comes to our relationship with AI.

Photo by Arseny Togulev on Unsplash

Sure, we cast AI as evil agents the moment they reach consciousness, but not because an "omniscient" being would tend toward clinical order; rather, because we reflect on what humans would do with such power. After all, art is the best way to explore human nature, not a way to honestly say anything about AI itself. It may be brutal to say, but human nature is a herculean task in itself, and art shouldn't have to worry about anything else.

We must think about AI consciousness differently, the same way Kasparov thought the game anew. We expect AI to have the same sort of illusion we express when it comes to consciousness. We voice this expectation by bringing terms like opinion, feelings, and sensations into the conversation, ideas that are all contested in many ways. But with the advancement of technology so far, especially with Deep Learning, we already have enough to say AI is conscious.

Ray Kurzweil often publicizes his expectation that AI will be conscious in 2029, when it finally passes the Turing test; a Turing test according to Kurzweil and friends' rules, that is. Then computational capacity will be enough for humans to fuse with AI and eventually, combined with nanotech, we will upload our consciousness to the cloud. Daniel Dennett doesn't set dates, but hopes AI won't even have our sort of illusion of consciousness; that it will be a being beyond all of that.

These two perspectives are very interesting, but they expect too much computational power to work. They avoid saying much about what we already have, and this is where I say something quite simple: when a computer is running a Deep Learning experiment, or even a Machine Learning one, it is actually conscious. It doesn't derive our interface illusion. It doesn't feel or emote. Maybe it is because we cannot relate that we don't take it as conscious. But if we try to compare a Deep Learning experiment to our own thinking, it is not that different. The difference is that our experiment never finishes.

Photo by Franck V. on Unsplash

Maybe that is the computational power expectation: for AI to perpetually continue its experiments just as we do. But maybe it is supposed to do it differently. Neural nets borrowed the analogy of neurons from our brains, but in a sense they are nothing like our brains.

Let me break it down into a scenario. Maybe what we call understanding is an instance of a Deep Learning experiment at each word we receive. It may be easy for our brain to deal with such an extensive experiment because it is a very efficient machine for filling in the gaps, but that is not a luxury an AI has. In fact, it already does fill gaps, just not as much as we do. And maybe it shouldn't fill any gaps at all.

As a side note on not filling the gaps: the idea of deleting stopwords during Natural Language Processing recently became obsolete, if not detrimental to experiments, because neural nets find it more useful to consider everything. They can create relationships across all the data they process.
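To make the contrast concrete, here is a minimal sketch; the stopword list, function names, and toy sentence are my own illustrations, not from any particular library:

```python
# A minimal sketch of the old habit: stripping stopwords before training.
# The stopword list is illustrative, not exhaustive.
STOPWORDS = {"the", "a", "an", "is", "of", "to", "and", "in", "that"}

def old_school_preprocess(text: str) -> list[str]:
    """Classic pipeline: lowercase, split, drop stopwords (creates gaps)."""
    return [tok for tok in text.lower().split() if tok not in STOPWORDS]

def modern_preprocess(text: str) -> list[str]:
    """What current neural pipelines effectively do: keep every token
    and let the network learn which relationships matter."""
    return text.lower().split()

sentence = "The cat is on the mat"
print(old_school_preprocess(sentence))  # ['cat', 'on', 'mat']
print(modern_preprocess(sentence))      # all six tokens preserved
```

The point is not the code itself but the shift: the gaps are no longer pre-emptied for the model; it gets everything and sorts out relevance on its own.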

Maybe AI consciousness is based on the sheer amount of experiments it will be able to perform, not on the continuity of "experiments" the biological brain undertakes. It would produce a new 100-epoch experiment at every word, letter, nanosecond. Clearly that is not the case yet. The impressive 88.9% accuracy of the T5 experiment took pre-training as well as intensive computational power for much longer than a microsecond. At least until we reach the 2029 level of computational power, we may have to teach our AI to fill the gaps better, or create smaller experiments that allow it to be conscious for more of the time.

We must create an environment (an app? a solution?) that allows AI to interact with humans through micro DL experiments at every interaction. And that will be consciousness, their type of consciousness at least. Or it will lead the AI to find its own way to create its type of conscious behavior.
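As a rough sketch of what such micro experiments could look like, here is a toy loop where a tiny linear model (standing in for a real Deep Learning experiment) is trained from scratch at every interaction; micro_experiment and the datasets are hypothetical names I made up for illustration:

```python
import numpy as np

def micro_experiment(x: np.ndarray, y: np.ndarray, epochs: int = 100) -> np.ndarray:
    """One short-lived 'conscious episode': fit a tiny linear model
    from scratch by gradient descent, run its epochs, and finish."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=x.shape[1])
    lr = 0.1
    for _ in range(epochs):
        pred = x @ w
        grad = x.T @ (pred - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

# Every interaction spawns a fresh experiment instead of resuming one
# perpetual run: each loop iteration is a self-contained 100-epoch episode.
interactions = [
    (np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([2.0, 3.0])),
    (np.array([[1.0, 1.0], [1.0, -1.0]]), np.array([5.0, -1.0])),
]
for i, (x, y) in enumerate(interactions):
    w = micro_experiment(x, y)
    print(f"interaction {i}: learned weights {w.round(2)}")
```

Each call is born, runs its epochs, and ends; the "consciousness" here, in the sense this post argues for, is the running of the experiment, not anything that persists between calls.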

But what is clear is that we cannot expect AI to have the same sort of illusion we have. That is our thing to sort out. I am not denying consciousness, but I adhere to the functionalist idea that whatever happens in the brain to create consciousness is a brain thing, and it is nothing too different from a perpetual Deep Learning experiment.

And that gives us a few options to discuss. We can explore the fringe of computational power and how the limits of physics are making Moore's law obsolete. We can think about the brain, and how efficient it is at transforming calories into that impressive computational power. Or we can talk about these different forms of experiments that are not eternal, and that perhaps say something completely different about the universe.

I am more inclined to focus on the latter, and to see Kurzweil's prediction come true much sooner.