Original article was published by Catriona Campbell on Artificial Intelligence on Medium
The Kardashev Scale & AI: Likely Bedfellows
How can we use one astronomer’s classification of alien civilisations to inform our thinking around AI?
This week, I’m keen to dip into a little science-fiction — shock horror! As you may have ascertained from a few other articles to date, I enjoy watching, reading and writing about sci-fi. And what would my commitment to the genre be without a fascination with the great speculators of our future?
One I stumbled upon recently is Nikolai Kardashev, a Soviet and Russian astrophysicist and SETI (Search for Extra-Terrestrial Intelligence) pioneer who passed away just last year.
During his long and colourful career, Kardashev became known for many things, but one that stands out above the rest is his method of measuring and classifying an alien civilisation’s level of technological advancement based on the amount of energy it is capable of storing and using.
Known as the Kardashev Scale (KS), the method describes three levels, each separated by vast orders of magnitude:
- Type I Civilisations (somewhat close to humans), also called planetary civilisations, which can capture and harness all available energy on their planet.
- Type II Civilisations, also called stellar civilisations, which can capture and harness all available energy within their planetary system, including that of the star it orbits.
- Type III Civilisations, also called galactic civilisations, which can capture and harness all available energy from their entire host galaxy.
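Those jumps can be made concrete with Carl Sagan's well-known continuous extension of the scale (not part of Kardashev's original three-level formulation, but widely used): K = (log10(P) − 6) / 10, where P is a civilisation's power use in watts. A minimal sketch, using a commonly cited rough figure of about 2 × 10^13 W for humanity's present power consumption:

```python
from math import log10

def kardashev_type(power_watts: float) -> float:
    """Sagan's continuous version of the Kardashev Scale:
    K = (log10(P) - 6) / 10, with P in watts, so that
    Type I ~ 10^16 W, Type II ~ 10^26 W, Type III ~ 10^36 W."""
    return (log10(power_watts) - 6) / 10

# Humanity's current power use is roughly 2 x 10^13 W,
# placing us at about Type 0.7 -- not yet a Type I civilisation.
print(round(kardashev_type(2e13), 2))  # → 0.73
```

By this measure, each full step up the scale demands ten billion times more energy than the last, which gives some sense of why the leaps between Kardashev's levels dwarf most other notions of "progress".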
I would be surprised if the levels of the KS don’t ring a bell for those with a keen interest in technology, and that’s because they’re not altogether unlike the broad categories of Artificial Intelligence (AI):
- Artificial Narrow Intelligence (ANI), also referred to as Narrow AI or Weak AI, which is the only type of AI that humans have managed to create to date. It is goal-oriented, designed to perform specific tasks, and merely simulates human behaviour based on a narrow range of parameters and contexts. This is the sort of AI that powers internet search engines; drones; chatbots; self-driving cars; voice assistants like Alexa, Siri and Cortana; and controversial facial recognition systems.
- Artificial General Intelligence (AGI), also referred to as Deep AI or Strong AI, which humans haven’t accomplished yet — there are varying predictions as to when this will happen, with some arguing it is a complete impossibility. If we do ever reach the stage of AGI, such tech would be capable of replicating human-level intelligence and behaviour, as well as solving any problem.
- Artificial Superintelligence (ASI), which is a theoretical form of AI that, if created, wouldn’t simply replicate human-level intelligence and behaviour, but instead develop self-awareness, continually self-improving to the point its intelligence extends far beyond our own in every way. This is the sort of AI we see in the imagined futures of sci-fi films like The Terminator and 2001: A Space Odyssey, with goals contrary to those of mankind and little choice but to screw us over.
So, as you know by now, while we haven’t quite become a Type I Civilisation just yet, humans have successfully achieved ANI — this is where the KS and broad categories of AI differ. Even so, the comparison is still an interesting one, don’t you think? After all, both are essentially thought experiments on what our species might be capable of at various unspecified points in the future.
Although, I’d be lying if I said the thought of spacefaring humans zipping between solar systems (perhaps including the one astronomers just caught on camera) isn’t more appealing than the thought of robots zapping us between the eyes.
If we put the two categorisations side by side, thinking about the enormous leaps between Type I, Type II and Type III Civilisations, it helps put the development of AI in perspective. In other words, it could be an incredibly long time before we move from ANI to AGI and then to ASI — if we ever do.
Of course, the leaps between Kardashev’s levels are likely considerably bigger than those between the stages of AI evolution, the former potentially involving thousands or millions of years. But perhaps the KS can still teach us something about long-term thinking around AI.
I’ve said this before, and I’ll say it again: I do believe that, if AI researchers and scientists are left to push on uncontrolled, their creations will one day outsmart humans. And I completely agree with Elon Musk’s recent criticism of those who refuse to accept the possibility.
This is why, in my upcoming book on AI, I make a case for planning for all eventualities — even if those eventualities await us hundreds of years down the line.
What I mean is truly pragmatic planning, involving realistic precautions taken through processes of governance that prohibit the development and use of AI whose goals conflict with our own — the robots of sci-fi futures, in other words. The European Union is currently heading down this path.
What I don’t mean is the sort of radical planning Elon Musk is tied up in right now, which would make space-farers of us all. The SpaceX founder and CEO is going full-throttle with his grand scheme to colonise Mars for humanity’s prospective escape from Earth — you know, in the event of an ASI takeover.
The irony here is plentiful. Such a strategy wouldn’t be necessary with more down-to-earth measures in place. But if SpaceX does continue with its plan, eventually succeeding in its goals, this will take us one step closer to becoming a Type II Civilisation.
And if we leave Earth for our dusty, red neighbour, not because we fancy an interplanetary holiday, but because we’re responsible for the creation of hostile AI, then it would be human stupidity and arrogance facilitating progress along the Kardashev Scale.
But hey ho. Much of what we face today on this planet is down to human stupidity and arrogance — so, no surprise there, huh?
We may live to see the shift from ANI to AGI, but it’s highly improbable we’ll ever be around for the transition from Type I to Type II Civilisation — especially keeping in mind suggestions that AGI or ASI will only arrive once we achieve Type II status.