Machine Learning: A Brief High-Level Overview

When talking about machine learning, the approaches used are typically split into four categories: supervised, unsupervised, semi-supervised, and reinforcement learning. As the name implies, the supervised approach involves a person directly guiding the program as it learns. Large numbers of labeled examples are fed into the system, and it gradually learns to recognize patterns in the data until it can reliably distinguish between the categories it is given. Because of this, supervised learning sees its most effective use when the task requires sorting data into two categories, selecting between multiple types of answers, or making predictions based on its data or on the predictions of other machine learning programs.
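
As a rough illustration, the sketch below trains a simple classifier on labeled examples using scikit-learn; the synthetic dataset, model choice, and parameters are all illustrative assumptions rather than anything from a specific project:

```python
# Minimal supervised-learning sketch: labeled examples go in, and the
# model learns to sort new data into two categories.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a labeled dataset: X holds the features,
# y holds the human-provided labels (0 or 1).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)          # learn from the labeled examples
print(model.score(X_test, y_test))   # accuracy on data it has not seen
```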

Unsupervised learning, by contrast, has a somewhat different goal. Because no labels are provided, the program cannot attach labels to the data it identifies. There is no correct answer here, and as such, the program has more freedom to discover structure in the data and surface how it is connected. In practice, the program groups data by the similarities it finds, much as Airbnb groups rental listings by neighborhood. Because unsupervised learning excels at finding similarities and abnormalities in data, it is well suited to clustering data by those similarities, anomaly detection, and other tasks that hinge on similarity.
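
A minimal sketch of this grouping behavior, again assuming scikit-learn and a made-up dataset, might look like this:

```python
# Unsupervised-learning sketch: no labels are given, so the algorithm
# groups points purely by the similarities it finds.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Unlabeled data only -- the true grouping is discarded on purpose.
X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

# k-means groups the points into 4 clusters (4 is an assumption here;
# in practice the number of groups often has to be chosen or tuned).
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)       # cluster index for each point
print(labels[:10])
```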

In addition to these two approaches, there is a compromise between them known as semi-supervised learning. Where supervised learning is given a large amount of labeled data (such as the roughly nine million annotated images in Google’s Open Images dataset) and unsupervised learning is given a large amount of unlabeled data to group by similarity, a semi-supervised program is given a small amount of labeled data and a large amount of unlabeled data. The labeled portion partially trains the algorithm before it is set loose to label the unlabeled portion, giving it a starting point for understanding while still allowing it to explore the data on its own. Current limitations make this less effective than supervised learning for labeling data, but if the approach eventually reaches a similar level of effectiveness, it would severely cut down on the need for massive databases of labeled data. In its current state, semi-supervised learning produces algorithms that excel at labeling a large pool of data based on a small identified set, in tasks ranging from anything as mundane as translating languages with an incomplete dictionary up to fraud detection.
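
One common semi-supervised recipe is self-training: fit a model on the small labeled set, pseudo-label the unlabeled points it is confident about, and retrain on the enlarged set. Below is a rough sketch of a single round of this, where the dataset, split, and confidence threshold are all illustrative assumptions:

```python
# Semi-supervised sketch via self-training: a little labeled data
# bootstraps labels for a lot of unlabeled data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Pretend only the first 50 examples are labeled; hide the rest.
labeled, unlabeled = np.arange(50), np.arange(50, 1000)

model = LogisticRegression().fit(X[labeled], y[labeled])

# Pseudo-label the unlabeled points the model is most confident about,
# then retrain on the enlarged training set.
proba = model.predict_proba(X[unlabeled]).max(axis=1)
confident = unlabeled[proba > 0.95]
pseudo_y = model.predict(X[confident])

X_grown = np.vstack([X[labeled], X[confident]])
y_grown = np.concatenate([y[labeled], pseudo_y])
model = LogisticRegression().fit(X_grown, y_grown)

print((model.predict(X) == y).mean())  # sanity check vs. hidden labels
```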

Finally, there is reinforcement learning. Reinforcement learning differs heavily from the other three: instead of being handed a large dataset to sift through for similarities, the program learns through trial and error, attempting actions and observing their outcomes. Throughout the process, either a supervising data scientist provides positive and negative cues, or the program itself assigns rewards and punishments based on the outcomes of its actions. Either way, the steps taken are largely up to the program, so long as they move toward the chosen goal. This approach sees its widest use in multi-step tasks with clear rules, with real-world examples ranging from video games to robotics. One such example is AlphaGo, the AI designed to play Go, which in 2015 became the first program to defeat a professional human Go player.
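
A tiny trial-and-error example is tabular Q-learning, where the reward signal alone shapes the behavior. The toy environment below (a five-cell track with a reward at the far end, and all of its names and parameters) is entirely made up for illustration:

```python
# Reinforcement-learning sketch: the agent walks a 5-cell track and is
# rewarded only for reaching the rightmost cell.
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.1

rng = np.random.default_rng(0)
for episode in range(200):
    state = 0
    while state != n_states - 1:
        # Explore occasionally; otherwise exploit what has been learned.
        if rng.random() < epsilon:
            action = rng.integers(n_actions)
        else:
            action = int(Q[state].argmax())
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # The reward (or its absence) nudges the value estimates.
        Q[state, action] += alpha * (
            reward + gamma * Q[next_state].max() - Q[state, action]
        )
        state = next_state

# Learned policy: 1 ("right") everywhere except the terminal cell.
print(Q.argmax(axis=1))
```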

There is one final learning approach that is often discussed as its own field, though it is technically a subset of machine learning. Deep learning relies on neural networks, a type of algorithm inspired by the way human brains work. The main difference in performance between standard machine learning models and deep learning comes down to the way data is fed into the system. With standard machine learning models, data needs to be structured in some way for them to interpret it, and if they are still producing inaccurate results after training, a person needs to step in and “teach” them by correcting the errors. With deep learning models, by contrast, the artificial neural network allows the system to detect when its results are inaccurate and correct itself. This does not mean deep learning cannot be wrong, however: even with self-correction, inaccuracies in the data can, as always, lead to inaccurate results. Deep learning has revolutionized the field of machine learning, leading to the most human-like AI currently in existence, such as Google Duplex, one of the very few AI systems that can plausibly claim to have passed a form of the Turing test.
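
To make the self-correcting idea concrete, here is a bare-bones neural network trained by backpropagation in plain NumPy: the error gradient flowing backward through the network is what adjusts the weights. Everything here (the task, layer sizes, and learning rate) is an illustrative assumption, not a recipe from any particular system:

```python
# Deep-learning sketch: one hidden layer, trained by backpropagation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)  # XOR-like task

W1 = rng.standard_normal((2, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5
b2 = np.zeros(1)

for step in range(2000):
    # Forward pass: raw data in, prediction out.
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    # Backward pass: measure the error and push corrections backward.
    grad_p = p - y
    grad_W2 = h.T @ grad_p / len(X)
    grad_b2 = grad_p.mean(axis=0)
    grad_h = grad_p @ W2.T * (1 - h**2)
    grad_W1 = X.T @ grad_h / len(X)
    grad_b1 = grad_h.mean(axis=0)
    for param, grad in ((W1, grad_W1), (b1, grad_b1),
                        (W2, grad_W2), (b2, grad_b2)):
        param -= 0.5 * grad  # gradient descent step

print(((p > 0.5) == y).mean())  # training accuracy after self-correction
```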