Traditional AI vs. Modern AI

The evolution of Artificial Intelligence and the new wave of “Future AI”

Today’s AI

Without a doubt, today’s biggest buzzword is Artificial Intelligence, or AI. Prominent research organizations, including Gartner, McKinsey, and PwC, have glorified the future of AI with mind-blowing statistics and predictions. PwC’s 2018 report predicts that by 2030 AI will contribute $15.7 trillion to the global economy, with an overall productivity increase of 55% and a GDP increase of 14%. The importance of AI within the United States was also underscored by an executive order signed by President Donald J. Trump.

“Together, we can use the world’s most innovative technology to make our government work better for the American people.”
— Michael Kratsios, Chief Technology Officer of the United States (source)

We have numerous examples in our daily lives where we leverage Artificial Intelligence without noticing: Google Maps, smart replies in Gmail (2018+), Facebook picture tagging (around 2015), YouTube/Netflix video recommendations (2016+), and so on. There are even a few astonishing news reports outlining its value and reach, like this one (2019), where Novak Djokovic used AI in the Wimbledon final. Or look at this website (launched in 2019), full of completely fake pictures of people who look completely real, generated by deep neural networks (Deep Learning). The list goes on and on.

Traditional AI (1950–2008)

The term “Artificial Intelligence” was coined in 1956 at a historic conference at Dartmouth. In that early stage of AI development, scientists and media hype produced utopian claims about the possibilities of AI. Some scientists went so far as to claim that within the next 20 years, machines would be able to do everything a human possibly could.

“machines will be capable of doing any work a man can do.”

— Herbert A. Simon, 1965

There have been many ups and downs for AI since then, as AI did not deliver on all of its promises. In 1973, following an investigation, the British government published the Lighthill report and cut funding for AI research at many major universities. Prominent AI approaches back then were Expert Systems and Fuzzy Logic, with Prolog and Lisp being the top programming-language choices alongside C/C++, of course. The first significant breakthrough in expert systems happened in the ’80s, when the first prominent expert system, SID, was introduced. Missteps and setbacks followed again, until another breakthrough came from IBM, whose supercomputer Deep Blue defeated world champion Garry Kasparov in New York City in 1997. Since the concept of AI was by then seen as a failure, IBM claimed it was not using AI in Deep Blue, which made for some interesting discussions.

Please note that all the breakthroughs listed above in “Today’s AI” happened in the last 8–10 years. The backpropagation algorithm at the heart of Deep Learning/Neural Networks was first introduced in 1986. The question is: why only in the last 8–10 years, when AI has been around for roughly 70 years? Let’s jump into the current era of “Modern AI.”

Modern AI (2008+)

The term data science was coined in early 2008 by two data team leads from LinkedIn and Facebook, DJ Patil and Jeff Hammerbacher. This new field in computer science introduced advanced analytics that leveraged statistics, probability, linear algebra, and multivariate calculus. The real breakthrough in Artificial Intelligence came in late 2012, when a CNN-based submission called AlexNet outclassed all other competitors in the historic ImageNet competition, with an error rate 10.8 percentage points lower than the runner-up. That was the advent of modern AI and is believed to be the trigger for the new boom in the AI world. One primary reason for the win was the use of Graphics Processing Units (GPUs) to train the neural network architecture. Later, in 2015, Facebook’s head of AI, Yann LeCun, pushed hard on deep learning and its possibilities, along with the other “godfathers of AI.” Today multiple cloud vendors offer cloud-based GPUs for “Modern AI,” whereas such adoption was simply not an option before.

The switch from CPU to GPU has changed the game, revolutionizing technology and redefining computing power and parallel processing. AI requires high-speed computation for its heavy mathematical workloads, and the volume of data generated has grown exponentially over the last ten years (source). A minimal sketch of shifting work onto a GPU follows below.
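To make the CPU-to-GPU switch concrete, here is a minimal sketch, assuming PyTorch is installed: the same matrix multiplication runs on either device, but a GPU’s thousands of parallel cores make the large batched linear-algebra operations at the core of neural network training dramatically faster.

```python
import torch

# Pick the GPU when one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Two large random matrices allocated directly on the chosen device.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

# The multiplication runs in parallel across the GPU's cores when one is available.
c = a @ b
print(device, c.shape)
```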

A future wave of AI

Google famously lets employees spend 20% of their time on their own ambitious and fun projects. In 2015, a member of Google’s search filter team, Alexander Mordvintsev, developed a neural network program as a hobby that astounded his colleagues with its dream-like, hallucinogenic imagery. Google named the project DeepDream. It grew out of experimentation with a trained neural network, playing with its activations at scale. Yet even today, one of the biggest mysteries of AI is that we have no real understanding of how AI makes its decisions internally, or how neural networks trained with backpropagation arrive at what they learn. In layman’s terms, the actual reasoning behind an AI decision, including any bias in it, remains a mystery; this is called the “black box of AI.”

XAI

One of the new waves of Artificial Intelligence effort is to break open that black box and get a logical explanation of the decision-making process. This new concept is called “Explainable Artificial Intelligence,” or XAI. Once XAI is achieved, the AI community will have access to a new wave of AI: more robust and resilient AI frameworks will be possible, including a predictable understanding of the AI process and future growth patterns. A small illustration of the idea follows below.
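As a small, hedged illustration (one simple technique among many, not a full XAI framework), permutation feature importance is a model-agnostic way to ask a black-box classifier which input features its predictions actually rely on; this sketch assumes scikit-learn is installed.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# A "black box" model: accurate, but its internal reasoning is opaque.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops;
# the bigger the drop, the more the model depends on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```

Richer XAI toolkits such as LIME and SHAP build on the same intuition, attributing each individual prediction to the inputs that drove it.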

Small Data

Major AI breakthroughs are happening in the Deep Learning space, and in Deep Learning the neural networks are extremely hungry for data. For example, to train a model to recognize a cat, it can take roughly a hundred thousand cat/non-cat images to reach a classification quality close to the human eye. Another area of research that is picking up rapidly is learning quickly from much smaller data sets, often with probabilistic frameworks; this new concept is called “Small Data.” The research question is: how do you train machine learning models with smaller amounts of data and still get accurate predictions? This is a giant opportunity in the AI space and is expected to explode with future opportunities for innovation. One practical route, transfer learning, is sketched below.
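A minimal sketch of the “small data” idea, assuming PyTorch and a recent torchvision are installed: instead of training a network from scratch on a hundred thousand images, reuse a model pretrained on ImageNet and retrain only its final layer on a small labelled set (the cat/non-cat batch here is stand-in random tensors).

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pretrained on ImageNet and freeze its feature extractor.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new 2-class head (e.g. cat vs. non-cat);
# only this layer is trained, so a few hundred images can be enough.
model.fc = nn.Linear(model.fc.in_features, 2)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One hypothetical training step on a small batch of "images" and labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```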

Two other areas of future AI research are significant advances in “unsupervised learning” and “reinforcement learning,” where we can leverage existing knowledge via transfer learning and generate artificially created sample data, for example via GAN models, as in the sketch below.
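As a minimal, hedged sketch of the GAN idea (assuming PyTorch is installed): a generator learns to turn random noise into synthetic samples while a discriminator learns to tell real samples from generated ones. The “real” data here is just a toy 2-D Gaussian; real applications use images or other domain data.

```python
import torch
import torch.nn as nn

# Tiny generator (noise -> 2-D point) and discriminator (2-D point -> real/fake score).
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + 2.0   # toy "real" data clustered around (2, 2)
    fake = generator(torch.randn(64, 8))     # synthetic data generated from noise

    # Train the discriminator to score real samples as 1 and fakes as 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator to make the discriminator score its fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```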

Key takeaways

Traditional AI is roughly 70 years old conceptually, but it has picked up significant momentum in the last 8–10 years (Modern AI). Exponential growth in data, affordable computing power, and pay-as-you-go models on the cloud were the real catalysts for these modern AI breakthroughs.

The future wave of AI is to break open the “AI black box” and understand the reasoning behind the decisions and predictions made by a machine learning model. The other major area of the future AI wave is learning from limited data sets, or “Small Data.”

About the Author

Awais Bajwa is an experienced Information Technology professional and an Artificial Intelligence enthusiast.