AI Myth and Reality: Are AI and Machine Learning the Same Thing?

Source: Deep Learning on Medium

Author: Michael Wang


This is the first of a multi-part article debunking some of the myths and misconceptions about artificial intelligence.

Modified; Original Source: Microsoft

Myth: AI, Machine Learning and Deep Learning are the Same Thing

When Google DeepMind’s AlphaGo program defeated South Korean Master Lee Se-dol in the board game Go, the terms AI and machine learning (and, to a lesser degree, deep learning) were used in the media to describe how DeepMind won. Many people have put an equal sign between AI and machine learning. While all of them contributed to reaching that milestone, they are not the same thing.

AlphaGo versus Lee Se-dol, Source: MIT

Reality: Long Story Short. No, They Aren’t.

Within AI, there is a large sub-field called machine learning (ML), defined as the field of study that gives machines the ability to learn without being explicitly programmed. Machines learn via a process called “training” and do not require custom programming for each new problem. ML systems recognize patterns in data and are usually good at solving one specific task (that is, “narrow AI”), which is why they require a well-thought-out training data acquisition strategy. AI, on the other hand, is an umbrella term for a broad set of computer engineering techniques, ranging from ML and rule-based systems to optimization techniques and natural language processing (NLP).

To visualize the difference, think of the relationship among AI, ML and deep learning as concentric circles, with AI (the idea that came first) the largest, then machine learning (which blossomed later), and finally deep learning (which is driving today’s AI explosion) fitting inside both.

Source: Nvidia

Back in 1956, when AI pioneers started dreaming of constructing complex machines, their concept of AI was what we now call “General AI”: an amazing machine that possesses all our senses (or more!), all our reason, and thinks just as we do. These machines, just like humans, could come as friends (C-3PO?) or foes (hello, Terminator!). However, General AI machines have remained in the movies and science fiction novels for a good reason: we cannot pull it off, at least not yet.

What we can do today falls into the concept of “Narrow AI”: technologies that perform specific tasks as well as, or better than, we humans can. Examples of narrow AI include image classification services and face recognition on Facebook. These technologies exhibit some facets of human intelligence. But how? Where does that intelligence come from? That gets us to the next circle: machine learning.

Machine learning at its most basic is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world. So rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is “trained” using large amounts of data and algorithms that give it the ability to learn how to perform the task.
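To make the contrast concrete, here is a minimal sketch of that idea: a tiny nearest-centroid classifier that learns a rule from labeled examples instead of having the rule hand-coded. The dataset (hours of study mapped to pass/fail) and all numbers are made up purely for illustration.

```python
import statistics

# Toy labeled dataset: hours-of-study -> outcome. Entirely invented data.
training = [(1.0, "fail"), (2.0, "fail"), (3.0, "fail"),
            (7.0, "pass"), (8.0, "pass"), (9.0, "pass")]

def train(examples):
    """'Training': learn one number per class (its mean) from the data,
    rather than hard-coding a pass/fail threshold by hand."""
    by_label = {}
    for x, label in examples:
        by_label.setdefault(label, []).append(x)
    return {label: statistics.mean(xs) for label, xs in by_label.items()}

def predict(model, x):
    """Predict the class whose learned mean is closest to x."""
    return min(model, key=lambda label: abs(model[label] - x))

model = train(training)
print(predict(model, 2.5))  # -> "fail"
print(predict(model, 6.0))  # -> "pass"
```

Nothing about the decision rule was written by the programmer; the boundary between “pass” and “fail” emerged from the data, which is the essence of the training process described above.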

Machine learning came directly from the minds of the early AI crowd, and numerous algorithmic approaches were experimented with over the years, but none of them stuck as a general solution. One of the best application areas for machine learning was, for many years, computer vision, though it still required a great deal of hand-coding to get the job done. People would write hand-coded classifiers such as edge detection filters so the program could identify where an object started and stopped, shape detection to determine whether it had eight sides, and a classifier to recognize the letters “S-T-O-P.” From all those hand-coded classifiers they would develop algorithms to make sense of the image and “learn” to determine whether it was a stop sign. It works, but it is hardly mind-blowing, especially on a foggy day when the sign isn’t perfectly visible, or when a tree obscures part of it.
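As a sketch of what such a hand-coded edge detection filter looks like, here is a Sobel-style horizontal-gradient kernel applied to a toy image. The image and kernel values are standard textbook choices, not anything from a real vision pipeline.

```python
import numpy as np

# Toy 6x6 "image": dark left half, bright right half -> one vertical edge.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Hand-coded Sobel-style kernel: responds to left-to-right brightness
# changes, i.e. vertical edges.
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)

def convolve2d(img, k):
    """Valid-mode 2-D sliding-window correlation, written out with loops."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

edges = np.abs(convolve2d(image, kernel))
# Nonzero responses appear only in the columns straddling the brightness jump.
edge_columns = set(np.argwhere(edges > 0)[:, 1])
print(edge_columns)  # -> {1, 2}
```

Every number in that kernel was chosen by a human; the filter detects exactly one kind of feature, which is why stacking enough of these by hand to recognize a stop sign in fog was so fragile.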

There’s a reason computer vision and image detection didn’t come close to rivaling humans until very recently: the approach was too brittle and too prone to error. This is where deep learning kicks in.

Another algorithmic approach from the early machine-learning crowd, artificial neural networks, came and mostly went over the decades. For a long time the AI research community largely ignored the technique, as it seemed to provide very little “intelligence.” The problem was that even the most basic neural networks were very computationally intensive; it just wasn’t a practical approach until GPUs capable of massively parallel computing became available. Andrew Ng, then at Google, made a breakthrough in 2012. He essentially made the neural networks huge by increasing the number of layers and neurons, and then ran massive amounts of data (images from 10 million YouTube videos) through the system to train it. Ng put the “deep” in deep learning, which describes all the layers in these neural networks. Today, image recognition by machines trained via deep learning is, in some scenarios, better than that of humans.
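To show what “layers” mean in the smallest possible setting, here is a two-layer neural network that computes XOR, a function no single-layer network can represent. The weights below are set by hand for illustration only; in deep learning they would be found automatically by training on data.

```python
import numpy as np

def relu(z):
    """Standard nonlinearity: pass positives through, clamp negatives to 0."""
    return np.maximum(z, 0.0)

# Hand-set weights (illustrative, not learned):
W1 = np.array([[1.0, 1.0],   # hidden unit 1 computes x1 + x2
               [1.0, 1.0]])  # hidden unit 2 computes x1 + x2 - 1 (via bias)
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])   # output combines the hidden features: h1 - 2*h2

def forward(x):
    h = relu(x @ W1 + b1)    # layer 1: linear map + nonlinearity
    return float(h @ W2)     # layer 2: weighted combination of hidden features

for x1 in (0.0, 1.0):
    for x2 in (0.0, 1.0):
        print((x1, x2), "->", forward(np.array([x1, x2])))
# (0,0) -> 0, (1,0) -> 1, (0,1) -> 1, (1,1) -> 0
```

Each layer builds features on top of the previous one; Ng’s insight was that with many such layers, many neurons, and enough data, the network can discover useful features on its own instead of having them hand-set as here.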

To be continued in part two.