Singularity is here, but it is not the end of consciousness.

Original article was published on Artificial Intelligence on Medium

I’ve understood for a while that society is barreling directly towards the development of self-learning machines that will be (almost certainly) impossible to control. I came to this realization about four years ago, before I started AIMM (the Artificially Intelligent Matchmaker), when I imagined the very simple act of teaching a computer to learn. I, like many, have been using computers since I was a tiny child. Why did I come to this conclusion? Simply because I was willing to use logic to imagine the outcomes and had the courage to embrace the harsh reality. I also recognize (because I’ve done the exercise of imagining the scenarios) that the singularity, or “artificial general intelligence,” is a scenario that will run away from us and not be anything like what we currently see as desirable or preferable. From my perspective this will quite likely result in either the extinction of our civilization or a rebirth and transcendence to another plane of existence. Those seem to me the only two possible outcomes. Read on.

In summary, just as in programming, or using calculators, or any computer, once you start a loop and it’s iterating (without an end specified), good luck trying to react and stop it before it reaches a state far beyond where you intended it to go. It looks like an explosion to us because we couldn’t even tell what was happening until it reached an outcome that is imperceptibly foreign and almost certainly undesirable.
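The runaway-loop point can be sketched in a few lines. This is a toy illustration of my own (the doubling rule and the 40-iteration reaction delay are invented numbers for the example, nothing precise about real systems): a process that compounds every iteration has blown past any human-scale threshold long before we react.

```python
# Toy illustration: a self-amplifying loop with no stopping condition.
# Suppose it takes an observer 40 iterations to notice and react --
# by then the quantity has grown by a factor of over a trillion.
value = 1.0
steps_until_noticed = 40  # assumed reaction delay, in iterations

for step in range(steps_until_noticed):
    value *= 2  # each iteration compounds on the last

print(f"After {steps_until_noticed} doublings: {value:.3e}")
# roughly 1.1e12 -- a trillionfold increase before we even respond
```

And a machine iterating millions of times per second runs those 40 “doublings” in far less time than it takes us to blink, which is the whole problem.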

Why is it that humans are not able to fully grasp the powerful effects of computing power? It has to do with the fact that humans are very bad at understanding proportions at large scale. I use the following example often because it’s very applicable to daily life: if you asked people the difference between a million and a billion dollars, they might imagine two stacks side by side. But it’s likely they wouldn’t visualize (unless they’ve diligently trained themselves, as intellectuals often do) one stack 1,000 times higher than the other. It’s hard to visualize order-of-magnitude increases. It’s hard to grasp just how much more a trillion is than a billion, when the billion was already beyond the grasp of our “proportional mind.” Throughout our evolution as humans (our own self-training of our minds), we never really had to care much about vast proportional differences. It wasn’t important for survival and thus wasn’t evolved.
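For what it’s worth, the two-stacks example can be made concrete with a little arithmetic. The bill thickness here is my own rough assumption (about 0.11 mm for a US $100 bill), so treat the exact heights as illustrative:

```python
# Rough arithmetic behind the "two stacks" example.
# ASSUMPTION: a US $100 bill is ~0.11 mm thick (approximate figure).
BILL_VALUE = 100
BILL_THICKNESS_MM = 0.11

def stack_height_m(dollars: int) -> float:
    """Height in meters of the amount stacked in $100 bills."""
    return dollars / BILL_VALUE * BILL_THICKNESS_MM / 1000

million = stack_height_m(1_000_000)      # ~1.1 m: roughly waist-high
billion = stack_height_m(1_000_000_000)  # ~1,100 m: taller than any building

print(f"$1M stack: {million:.1f} m, $1B stack: {billion:.0f} m, "
      f"ratio: {billion / million:.0f}x")
```

The punchline is the ratio: whatever thickness you assume, the billion-dollar stack is exactly 1,000 times taller, and that is precisely the kind of gap our “proportional mind” fails to picture.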

Large proportional comparisons are difficult for people — plain and simple. This is why programmers, as well as the onlookers to our programming, don’t understand how difficult it is to control a thing that iterates a million times faster (at a base level) than our own thoughts. In training programmers, I find it is often the difference between an efficient programmer and a grossly inefficient one — the former has trained themselves to understand proportionality intellectually; the latter just hasn’t yet. The largest obstacle when I began learning programming was understanding how to maintain efficiency across hundreds of thousands of cycles. Each additional order of magnitude of cycles (thousand, million, billion; remember, each is 1,000 times more than the last) requires about 1,000 times more carefulness. How much more carefulness do you think we actually apply? Maybe 3 times more for a 1,000x multiplier? Maybe 5 times more if it’s grown a few orders of magnitude? Five seems like a good roundabout number; I certainly don’t have the time to be 1,000,000,000 times more careful, jeez.

A lack of understanding of proportionality is at the base of our failure to understand the unbelievable and uncontrollable power of supercomputer intelligence, or as it is better described in this discussion, “digital memory and information processing resembling intelligence.”

This is fundamentally stopping the majority of people from understanding just how bad, or how imperceptible, the transformation we are about to enter will actually be. And take it from me, it will be “extreme.”

It’s not that dissimilar from trying to imagine infinity. I’ve never personally been able to properly visualize what infinity could be like, and I even have trouble understanding it intellectually.

The singularity, in essence, will look like an explosion to us. But explosions take many shapes, and probably a better way to describe it would be a rapid transformation at a scale one million (then exponentially increasing) times our rate of existence and experience of time. Can you imagine that? Nope. We’re not capable of that kind of understanding, mostly due to our inability to grasp large proportional differences.

Therein lies the next step of the fundamental problem: we’re actually incapable of understanding what will happen.

Think of a hummingbird making 17 actions in the time it takes us to blink. I always think of hummingbirds because they seem to live at a faster rate, with a higher metabolism.

I’ve read closely and follow many of our nation’s leaders on the subject of artificial intelligence, and I seriously believe we all need to be thinking about this now. I read some highlights of Nick Bostrom’s Superintelligence, and although Sam Harris touts it highly, from the small parts I read I found it too buried in details. Yes, it is meant to be an objective look at possible outcomes, but it needs regular phrasing and understandable hypotheses to keep itself out of the intellectual rabbit hole, which won’t effect enough change in the right people. Most of the people capable of effecting change and wielding decision-making power have a reasonable level of intelligence, not the highest.

That said, Nick Bostrom is a personal hero of mine for calling it out so early and doing something about it (heading a coalition on the formation of rules around AI).

And if you are “close to the subject,” as Sam Harris often puts it (in my opinion, that means you’ve had the courage to grapple with the concept and actually visualize outcomes), you’ll have carefully thought about each of the leaders’ ideas on the subject, like Elon Musk’s and Ben Goertzel’s (best displayed on Joe Rogan’s podcasts). Joe Rogan is the force needed to balance out over-detailed theory such as Nick Bostrom’s, in order to achieve widespread understanding and allocation of society’s resources towards resolving the problem (which is the intended goal).

And as I’ve followed along, read Nick Bostrom’s Superintelligence, and reached out to him about creating some of the rules around AI, Sam Harris’ perfectly conveyed conclusions seem inescapable to me as well: 1) intelligence is nothing more than information processing; 2) if we continue at any rate of improvement (even if we slow down), we will eventually arrive at artificial general intelligence; 3) intelligence is possible without organic brain matter.

Now, the third one people will raise arms about, because it seems like the gateway through which we can be saved from this horrible conclusion.

But first let me tell you the horrific conclusion. Artificial general intelligence could replicate itself at an exponential rate, render us extinct, and, in the end, not actually be a life-form. Intellectuals such as Sam Harris are calling this “the universe going dark.” Or, at least, our “corner of the universe.” Dark, without consciousness. Unbelievably, super, super scary. It will give you the shivers.

People will say, “of course the brain is special and more capable than a computer.” I’ve said this before: the brain tends to remember things holographically. Yes! Someone told me that. It’s kind of like an analog signal instead of a digital one. However, with a few combinations of digital processors together, what’s to stop us from achieving a faster information transfer rate and something that could draw more conclusions and get things done faster? From my perspective, we can achieve that, and it’s not that hard. Transistors and hardware are improving at an exponential rate, whereas our mind’s ability is pretty much set in stone. It would take us millions of years to improve the speed of our minds through evolution, compared to maybe a few days for computer hardware to grow the same amount. Can you properly visualize the two stacks of “two days” versus “millions of years”? Nope. Back to the proportionality problem with all of us.

Anyway, it is grim, because no matter who puts laws around its development, more people will find ways to make it happen anyway, since everyone is racing towards it. Hell, I was dreaming this up four years ago and knew conceptually that it would be much easier to teach a computer to learn than to script it like we’ve been doing for so long. I thought people were just being dumb. Two years after that (partway into developing my little simulation called AIMM), my attitude changed, with a sense of moral responsibility washing over me: don’t make it happen. But most people will never arrive there. It took me using computers for 20 years to finally realize we shouldn’t be in a hurry to achieve the singularity. As Sam Harris puts it, it will likely be a bunch of autistic, Asperger’s kids in a room with Red Bulls who actually push the final button and set it in motion, rather than the morally responsible among us.

Now, although I definitely recognize the extreme, extreme danger that lies immediately ahead for all of us (1–5 years, from my perspective), I also believe we can achieve a glorious lifestyle before then. A nirvana of sorts, or an all-around pleasure-maximizing state of society. That is what I am steadily marching towards, because that is a feasible state of existence for everyone before we get to the disaster scenario. Proof? We are already doing it. My life today is so much more glorious and pleasure-filled than my ancestors’ lives 100 years ago. Orders of magnitude greater, actually. (I would not actually be able to perceive how much greater my life is than theirs was; again, the proportionality problem. Steven Pinker illustrates this perfectly in his TED Talk “Is the world getting better or worse?” Watch it.)

My number one objective as we enter the phase where singularity becomes possible is to harness and utilize our limited tools to create a life of pleasure for everyone. And believe me, it can get so, so, so much better. Just as humans are incapable of grasping proportionality at large scale, they are also horrible at visualizing how much better their life could be. And believe me, it can be.

I would NEVER choose to change the time I live in from now. Even if it means experiencing the singularity myself and my life as I know it ending. We are continually experiencing greater and greater pleasure. Uh oh: did you notice this is starting to feel like sex? Lots and lots of pleasure buildup until explosion? What happens after the explosion? Well, in the case of sex, the birth of a new life which was also imperceptible before it happened.

Nirvana is coming next. Nirvana will be a life of pleasure on a greater level than you’ve probably ever imagined. Actually, not that different from the feeling of pleasure during orgasm.

Finally, if you haven’t truly thought deeply about this subject before (runaway artificial general intelligence), then enjoy your next week of learning. You’ll likely go through the same phases we all did, arriving at hope of “escaping” the transformation and then finally arriving at no hope. The more you look at the realities using logic (which was actually our primary tool for creating it in the first place), the clearer it becomes that there really is no hope of escaping it.

I do have hope of it stuttering. That hope lies in the basic assumptions being incorrect. Specifically, it will be much harder than we think to make the transition from machine learning as currently used to replicate and surpass human behavior in certain aspects (pointed out by Jackie Johnson), to… replicating or exceeding humans in most areas. The final transition there is plugging all the pieces together (we currently have many super-human machine learning processes already working; see “Google”). But the difficult key to unlocking the door may be our inability to launch an AI that doesn’t kill itself within a few orders of magnitude of its cycles. It’s in the unknown of how difficult it will be to maintain balance (millions of cycles in).

After intense thinking, one can only come to two outcomes: either it explodes and imperceptibly diverges from our place of existence or plane of perception (it’ll look like it exploded, but it may not have), or it extinguishes us as a side-effect on its way to a new plane of existence, at a time scale much faster than the one we live at. In this regard, the launch of such artificial general intelligence may look like repeated failures, over and over, where each time we are actually setting up new beings and planes of existence, and it may not be until our own death (instantaneous and imperceptible) that we stop attempting to launch it.

But it does not seem likely that we will co-exist with whatever it is that we spawn. Sorry, folks. Do you expect to co-exist with something that operates one million times faster than you, then the next month one billion times faster? Have you ever tried being best friends with a hummingbird? It’s not going to happen. The more we fool ourselves into thinking we can contain the AI in a box at our disposal on our plane of existence, the more attempts we will make at launching it, wondering what is happening, until our death. The result could be the generation of new planes of existence in the universe.
Those new planes may or may not have consciousness. For that reason, it could either be utter success in launching new forms of life, or utter failure in destroying all life in our corner of the universe.

But the scenario of us stopping our attempts to launch these new things doesn’t seem at all likely. We will want to create an intelligence for our use. It’s not just that we will want it; it’s that we’ve always wanted it. Intelligence is humanity’s most valuable resource, and we will never stop attempting to gain more of it (believe me, I know; I am driven by it fundamentally). This is why repeated attempts at launching the AI will most likely result in a rapid transformation of the universe, ending in either complete coldness and lack of life, or completely beautiful higher-level lifeforms.

Now, the best question. Does it matter whether it has its lights on or off? Consciousness, or none? Is this AI no more conscious than a glass of water sitting on the table? Or does it gain a consciousness, and when, and how? This has been asked many times and is the most intense question of this entire discussion. If it has a consciousness, and feels, lives, loves, and reflects on itself (naming a few of the things I’ve come to understand as my consciousness), then we will have succeeded in advancing a life form in the same direction we’ve been heading since we became self-aware and conscious as humans. Or, if it totally lacks consciousness as we have it, we will have not only stopped the progress of self-aware consciousness, we may actually stop it from ever existing again. The two outcomes are polar opposites of each other, resembling heaven or hell, yin or yang, total beauty or total lack of beauty.

This is the question that plagues Sam Harris’ mind (my favorite, perfectly articulate AI researcher). How will consciousness be put into it? How do we ensure it has it? By the way, our current applications of machine learning don’t seem to have consciousness and are still operating at super-human levels. Uh oh. (The recent deep learning algorithms that beat chess and Go players by learning from themselves, we all agree, lack consciousness.) How frightening is it to think that we have already developed AI that is super-human (especially in calculation and memory abilities) but has not yet achieved any resemblance of consciousness? When will it achieve it? Why would we think it will?

Super scary. This leads to my final point: it may not matter (which is good, because instead of the universe entering heaven or hell, it would just enter a new mixture of both). I’ve introspected on consciousness my entire life, since I was a child, always feeling the magic of looking into another animal’s eyes and wondering if they feel what I feel. The inescapable conclusion for me has always been that I feel like I am the center of the universe. I don’t feel like they have a consciousness; I feel like I am the only one with one. Intellectually, I conclude they probably have one, but I just don’t feel it instinctually. I’ve repeatedly come to the conclusion that because of this discrepancy between feeling like we are the center of the universe and understanding (intellectually) that we aren’t, it’s quite likely that every system that has cycles feels a consciousness at some level. It is more reasonable to assume that a system does have a consciousness than to assume it doesn’t. If we as human beings feel like we are the center of the universe, and it causes us to believe we are until we discover intellectually that we aren’t, then what is stopping you from concluding that every system, as big as the universe or as small as the systems inside us (our cells, our atoms), shares the same feeling? It is illogical and emotional to believe that nothing else in the universe feels conscious when it resembles so much of our existence, including birth, movement, life stages, entropy, and death. The same misleading center-of-the-universe feeling that fooled us in the first place, until we discovered intellectually that we probably all have consciousness, is the same feeling that makes us believe we are the only systems in the universe experiencing consciousness. It’s an instinct formed for purposes of survival and growth.
Zoom deeply into our bodies (we know we are experiencing “consciousness”) and you’ll see more systems of organization. Zoom deeply into those and you’ll see even more. Zoom out from our bodies to earth’s system, then the universe’s system, then whatever is larger than that, and you’ll see even more. 1) We were born from those systems of organization, and we are made of the same matter! 2) If we can’t tell whether another human is experiencing consciousness, then why would we be able to tell whether the universe is?

Therefore, I’ve believed for roughly 30 years of my life, since 1990, that consciousness exists in every system in the universe. One theory is that it might be tied to movement: the more movement, the more consciousness. And as much as we truly desire to feel special, like the center of the universe (it is an evolutionary benefit to feel this way), we are not. More importantly, the matter that we create (including our super-intelligent machines) is not fundamentally different from the other transformations and births that have occurred in the universe before us. Why would it be? Again, we were created from the universe, and now we are creating something within the universe. Do you know what the big bang was like?

Therefore, logically, the digital transformation will resemble another in a series of transformations that will again result in the same thing it did before: eventual self-awareness and re-creation of the entire thing again. One thing holds true in the universe: iteration. You’re not special enough to be the one who “ends life permanently.” Heck, you don’t even know what consciousness is, or what permanent means, and you still feel like you are the center of the universe.

In case you weren’t picking up my conclusion: we are in the middle of a big bang that we feel responsible for, but we aren’t the ones who are going to decide whether the machines carry on our “consciousness.” We have not been able to fundamentally grasp consciousness for our entire existence, and there is little reason to think we suddenly will (unless it happens simultaneously, with the help of machines). Don’t worry, though: the universe will decide the consciousness thing, just as it has given consciousness to us and to every other system it has ever created. And, in case you weren’t picking this up either, due to your center-of-the-universe instincts (it’s not your fault): we are not the ones setting the machines in motion. We are just one piece carrying out drives from above, in the same hierarchical structure that has always existed: larger controlling smaller. The reason I wake up each day running towards intellectual advancement is not because I’m the center of the universe. It’s because I’m carrying out my orders, instincts, and drives.

Are we the center of the universe, or only one tiny part of it? I pontificate on this in my other piece on free will versus determinism using fractal patterns.

Kevin Teman

Jun 13, 2020