AI and Its Impact on Human Evolution
Discussions on Artificial Intelligence (AI) and Machine Learning (ML) have been everywhere over the past decade or so. However, much of this discussion is based on an inappropriate or inadequate understanding of the technology.
As a remedy, and to contribute to the ongoing discussions, this article explores the various aspects of Artificial Intelligence in depth. First, we will attempt a thorough understanding of the technology enabling AI. Then, we will focus on how AI has shaped, and will continue to shape, the course of human evolution.
What is Artificial Intelligence (AI)?
John McCarthy, the father of AI, defined it as “the science and engineering of making intelligent machines, especially intelligent computer programs.”
To adequately grasp what AI is, it is imperative to understand ‘intelligence’ first. Derived from the Latin terms intelligentia and intellēctus, it is the ability to perceive, comprehend, or understand something.
By extension, it also includes logical coherence, rationality, autonomy in action, and situational adaptability. Moreover, emotional, spatial, and kinesthetic awareness are also characteristics of intelligent beings.
AI is a branch of computer science which deals with the structure, function, and development of processes that are artificially simulated to perform tasks involving human intelligence.
Broadly put, these tasks involve voice or facial recognition, decision making, translating, and so on.
The Levels of AI
Based on the maturity of ‘intelligent’ processes, AI may be classified into at least three different levels — Narrow or Weak AI, General AI, and Superintelligence.
Narrow or Weak AI
Presently, this is the most basic level of AI. As the name suggests, intelligent systems at this level of AI are trained to perform only one, or a few, specific functions.
Owing to its limited learning abilities, Narrow AI can only learn within the periphery of its specific, pre-defined use case. In other words, AI at this level isn’t dynamic enough to adapt to situations beyond the scope of the initial programming.
For instance, a Narrow AI bot programmed to translate from, say, English to French cannot automatically learn English-to-Spanish translation; that would require human intervention and further programming.
That said, the following are some of the most common use cases of Narrow AI.
1. Natural Language Recognition and Processing
2. Self-driven vehicles
3. Facial or Voice Recognition
4. Spam detection
General AI
At this intermediary stage, the cognitive abilities of an intelligent system are on par with those of humans. However, General AI remains a theoretical possibility.
To have General AI, a process must have specific defining characteristics. These, in turn, may be summarized as follows. Here, for a better understanding, it is worthwhile to recollect the characteristics of intelligent beings from the previous section.
1. Learning — Adapting to changes based on experience.
2. Memory — Comprehending, retaining, and retrieving knowledge from experiences.
3. Reasoning and Abstraction — Deriving logical conclusions, performing generalization, and deducing rules based on the available data.
4. Problem Solving — Finding systematic solutions to encountered problems.
5. Divergent Thinking — Deriving multiple solutions for a single problem.
6. Convergent Thinking — Eliminating possible solutions to determine the best response.
7. Emotional Intelligence — Recognising and interpreting human emotions.
8. Speed — Performing the above functions, and responding in real-time.
Superintelligence
At its most mature stage, AI is expected to surpass the cognitive abilities of an average human. One theoretical justification for this state lies in the Law of Accelerating Returns, propounded by Ray Kurzweil.
It asserts that the rate of progress in a learning environment grows exponentially, as do its use cases, as it moves from one phase to the next. Given this, the transition from General AI to Superintelligence is expected to be rather smooth and swift.
The fact remains, however, that a section of researchers regards this stage as a threat to the very existence of humankind. For others, this stage would mean abundance and equity.
At this point, it is worth mentioning that the discussions that follow focus primarily on Narrow AI, its tools, and so on.
Machine Learning — The Cornerstone of AI
Having discussed the forms (levels) of AI, we are now in a position to focus on its functioning. Machine Learning is the critical factor enabling AI.
In itself, Machine Learning is a subset of AI that provides a set of tools and methods enabling a process to learn by observation, from data and ‘experiences.’ It is a statistical approach that uses probabilities to determine outcomes and is modeled on the available training data.
Based on the degree and nature of training data involved, there may be different methods of Machine Learning.
Supervised Machine Learning
In this, we fully equip the algorithm with labeled training data. That is, it already has the solutions to specific problems, based on which it interprets incoming data.
The algorithm constructs a model through multiple iterations and validations, while the developer adjusts it slightly at each iteration. The process is repeated until the model performs to the desired accuracy on the training data.
Supervised ML, again, is of two types.
1. Classification — The algorithm predicts categorical responses to given data.
2. Regression — The algorithm predicts a numerical, continuous response.
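The distinction between the two types can be made concrete with a small sketch in pure Python (the data points, labels, and values below are invented purely for illustration): a one-nearest-neighbour rule predicts a categorical label, while ordinary least squares fits a continuous numerical response.

```python
# Classification: a 1-nearest-neighbour rule predicts a categorical label.
# Each training example pairs a feature value with its human-supplied label.
train_points = [(1.0, "spam"), (1.2, "spam"), (4.8, "ham"), (5.1, "ham")]

def classify(x):
    # Return the label of the closest labeled training point.
    return min(train_points, key=lambda p: abs(p[0] - x))[1]

# Regression: ordinary least squares fits a continuous response y = a*x + b.
xs, ys = [1, 2, 3, 4], [2.1, 4.0, 6.2, 7.9]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(classify(1.1))        # → spam (a categorical response)
print(round(a * 5 + b, 2))  # → 9.95 (a continuous prediction for x = 5)
```

Both models are “supervised” in the same sense: every training example carries the correct answer, and the algorithm only has to generalize from it.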
Unsupervised Machine Learning
At times, when it is not feasible to provide labeled training data, the algorithm has to find its own ways of segregating data. Usually, there are three approaches to this.
1. Clustering — Creates clusters of similar data points.
2. Anomaly Identification — Identifies anomalies in the given data.
3. Association — Associates data based on their relationships and dependencies.
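The clustering approach can be sketched with a minimal 1-D k-means loop in pure Python (the data and the choice of k = 2 are invented for illustration). Note that no labels are supplied; the algorithm groups similar points on its own.

```python
# Minimal 1-D k-means with k = 2 clusters.
data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centroids = [data[0], data[3]]  # naive initialization from the data itself

for _ in range(10):  # a few refinement iterations
    # Assignment step: attach each point to its nearest centroid.
    clusters = [[], []]
    for x in data:
        idx = min(range(2), key=lambda i: abs(x - centroids[i]))
        clusters[idx].append(x)
    # Update step: move each centroid to the mean of its cluster.
    centroids = [sum(c) / len(c) for c in clusters]

print(sorted(round(c, 1) for c in centroids))  # → [1.0, 8.1]
```

The two centroids settle on the two natural groups in the data, which is exactly the “segregation” the algorithm had to discover without labeled examples.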
Semi-Supervised Machine Learning
In this mixed method, part of the data is labeled by a human. The Generative Adversarial Network, or GAN, is an excellent example of this method. Herein, a generator network uses labeled data to produce new data, while a discriminator network determines the validity of that data in relation to the training protocol.
Neural Networks and Deep Learning
As the name suggests, this machine learning method involves artificial simulations of the human neural system. Neural networks are among the fundamental elements of most ML algorithms. A typical neural network has two or three layers: an input layer, an optional hidden layer, and an output layer.
When a neural network has more than three layers, it is referred to as a Deep Neural Network (DNN), and the method is known as Deep Learning. Apart from DNNs, deep learning architectures also include the Deep Belief Network (DBN), built by stacking Restricted Boltzmann Machines, pairs of layers whose units are connected across, but not within, the layers.
With the ability to work with 1000 or more layers, Deep Learning is, by far, the most effective of the many machine learning methods.
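As a rough sketch of what these layers mean in practice, the following passes an input through a tiny hand-wired network with an input layer, one hidden layer, and an output layer. All weights and biases here are invented for illustration, not learned; in a real system they would be adjusted during training.

```python
import math

def sigmoid(z):
    # A classic activation function mapping any number into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # Each neuron: weighted sum of its inputs plus a bias, then activation.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0]                                            # input layer: 2 features
hidden = layer(x, [[1.0, -0.5], [0.3, 0.8]], [0.0, 0.1])   # hidden layer: 2 neurons
output = layer(hidden, [[1.2, -0.7]], [0.05])              # output layer: 1 neuron

print(0.0 < output[0] < 1.0)  # → True (the sigmoid keeps the output in (0, 1))
```

A deep network simply chains many more such `layer` calls, which is why adding layers increases both expressive power and computational cost.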
Factors Promoting Artificial Intelligence
Having discussed the fundamental structural and functional units of the technology enabling AI, let us now look at the factors which propelled its massive boom in recent times. The developments discussed herein are themselves enabled by progress on a number of interlinked fronts.
Substantiating Moore’s Law, computational speeds have doubled roughly every two years, while prices have halved. Moreover, advanced Graphics Processing Units (GPUs) are now highly adept at computing vectorized numerical functions, which are key building blocks of most machine learning algorithms.
One of the significant outcomes of the internet boom is the massive increase in data production. According to Forbes, the world was producing 2.5 quintillion bytes of data every day back in 2018. With the rollout of 5G technologies, as well as the rapidly growing mobile device industry, this figure is only expected to rise with time.
The more the data, the better AI algorithms can be trained.
The rising popularity of AI is not merely hype but is backed by substantial research and innovation in this field. Among other things, the frameworks and platforms developed by tech giants, such as Google’s TensorFlow, Facebook’s PyTorch, and Microsoft’s Azure, have significantly contributed to the rapid growth of AI.
With that, we’re in a position to halt our discussion of the technological, structural, and functional aspects of AI. Now, let us focus on the next important thing: the impact of AI on human evolution.
The Evolution of Humans and AI
The first significant leaps in human evolution were, arguably, the discovery of fire and of agricultural methods. Then, thousands of years later, came the internet, which again transformed the way we think and do things. Primarily, it offered a range of tools that enabled us to perform actions that weren’t otherwise possible.
Now, with the advent and development of AI, we are experiencing a similar, transformative phase along the trajectory of human evolution. Even in its present nascent stage, AI has opened promising avenues and is transforming almost every industry, including health, finance, transportation, logistics, and so on. With further development, AI will not only become more efficient itself, but will also enable the evolution of advanced humans. And, as of now, there seem to be at least two ways for this to happen.
Neuralink, as envisioned by Elon Musk, is striving to develop brain chips which will enable us to control technology with our thoughts. If actualized, this would mean that tools and technology would cease to be external entities for humans. Instead, they would become an integral part of us.
At its logical best, this will even allow communicating with one another only by thinking. That is, without speech, or any other form of expression for that matter. In other words, we would be able to transmit our thoughts, just as we can send emails today.
In its present stage, the Bluetooth-enabled Neuralink device can interact with mobile devices such as iPhones.
Biotechnologists have long been researching Genome Editing, using technologies such as CRISPR-Cas9. With AI, the prospects of such techniques could be taken to an altogether different level.
Among other things, genome manipulation can help increase our cognitive abilities. In this context, mature AI can be trained to run simulations of genetic structures that enhance cognition. In turn, this has the potential to enable the discovery of mutations for the development of the human brain.
The Road Ahead
At present, these may seem mere theoretical possibilities. Yet their promise is undeniable. Moreover, given the rate at which we are developing these technologies, it is quite likely that we will create more mature forms of AI in the near future.
And when that happens, it will inevitably test our readiness to adapt to the exponential changes that the use of AI facilitates. With AI’s evolution, we, the creators, will have to evolve as well if we are to keep pace with our creation. Only then will we be able to reap the optimal benefits of this promising technology and ensure a better life for ourselves.
The domain of AI and its relationship with humans is rife with moral and ethical dilemmas, and it often provokes fears of machines overtaking their makers. Much of that paranoia is, however, somewhat naive, especially if we truly grasp the enabling technologies as well as their implications.