Eagles and Algos: Model-Agnostic Meta Learning

This is part 2 of a six-part series. Links to the other articles in this series can be found at the bottom of this page.

Golf can be a challenging sport to learn, let alone master. Early forays are riddled with embarrassing swings and misses, followed by the humbling realization that you perhaps could use some coaching. These coaching sessions usually come with a barrage of seemingly contradictory instructions: keep your head down, square your shoulders, tilt your knees, be firm in your grip but loosen your wrist, follow through…and keep your head down! All these directions just to hit a stationary object sitting on a perfectly manicured patch of grass. You start off by barely making contact with the ball. But after hundreds of practice shots, you begin to consistently ping the ball over 150 yards.

This evolution from clueless golf enthusiast to respectable amateur is analogous to how a decision-making entity, also known as an agent, acquires new skills in machine learning. Data (coaching instructions) is fed to an agent (golfer) to train it. The agent uses an algorithm (brain and muscles) to make sense of the data. This training process generates a model (muscle memory) which is subsequently utilized to complete a specific task (hitting a golf ball into a hole).

It is important to point out that the analogy between human and machine learning is an overly simplistic abstraction of what is otherwise a complex subject area, with aspects that sometimes offer no human parallels. Also, a number of terms used in this article are rather nuanced, with meanings that change depending on context. A glossary has been provided to define them as used in the context of this article. These definitions should NOT be assumed to be universal.

Teachable Machines

Since its advent, language has been the cornerstone of instruction. It provides reliable connective tissue between teacher and student. However, it has its limitations: even for a fluent speaker, there are certain things that are difficult to communicate through words alone. For example, consider a golfer's first class. The instructor not only explains the intricacies of an ideal golf swing but also complements her instructions with several demonstrations. By listening to and observing the instructor, the student is able to imitate her movements with increasing precision over time.

A similar conundrum exists in machine learning, but the workarounds are not as straightforward. Natural Language Processing, the means by which computers interpret human language, has its own shortcomings. These include an inability to capture visual context. For example, if I say to my iPhone virtual assistant, "Hey Siri, what is that?", I will get a meaningless response like "you got me". The reason is that the pronoun "that" could refer to anything, and without visual input, Siri is stumped. Unlike humans, who have an inborn intuition to augment their comprehension with imitation (or demonstration), machine agents do not innately possess this level of cognition. Instead, they must be taught how to learn.

Meta-Learning

The process of learning how to learn is known as meta-learning. The primary objective of this field is to use an algorithm to train an agent in such a way that it can go on to perform a variety of related but new tasks, rather than only the specific task it was trained on. This is achieved by continually updating the model's key parameters so that it can handle unfamiliarity and complexity. Researchers such as Jürgen Schmidhuber, Adam Santoro, Sergey Bartunov and Matthew Botvinick have done some amazing foundational work in this field.

Let us return to your golf-learning journey for a moment. After acquiring your new motor skills over several classes, you are now ready to play a full 18-hole course. The instructor teaches you how and when to use the 12 different clubs in your bag (irons, woods, wedges and a putter). Each time the ball lands in an unfamiliar scenario, you must learn which club is ideal for advancing it. Soon enough, you become pretty adept at navigating a full course. The specific body shape you assume for a particular type of shot, coupled with the motion your hands go through, is the model. The clubs, along with their use cases, are the parameters in meta-learning; and you must constantly update your knowledge of these parameters in order to keep the model competent.
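To make the idea of "a variety of related tasks" concrete, here is a minimal sketch in Python, modelled loosely on the sinusoid-regression setup used by Finn and her co-authors in the paper discussed below: every task is a sine wave with a randomly drawn amplitude and phase, and the learner only sees a handful of labelled points (a "K-shot" sample) before being tested on new ones. The function name `sample_sine_task` and the exact sampling ranges are illustrative, not taken from any library.

```python
import numpy as np

def sample_sine_task(rng, k_shot=10):
    """Draw one regression task: a sine wave with random amplitude and phase.

    Returns a small "support" set the learner may adapt on, plus a "query"
    set used to judge how well it adapted. Illustrative sketch only.
    """
    amplitude = rng.uniform(0.1, 5.0)
    phase = rng.uniform(0.0, np.pi)

    def target_fn(x):
        return amplitude * np.sin(x + phase)

    x_support = rng.uniform(-5.0, 5.0, size=(k_shot, 1))
    x_query = rng.uniform(-5.0, 5.0, size=(k_shot, 1))
    return (x_support, target_fn(x_support)), (x_query, target_fn(x_query))

rng = np.random.default_rng(seed=0)
support, query = sample_sine_task(rng)
print(support[0].shape, support[1].shape)  # (10, 1) (10, 1)
```

Meta-learning is then the business of training across many such randomly drawn tasks, so that when a brand-new amplitude and phase show up, those few support points are all the agent needs to adapt.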

Model-Agnostic Meta-Learning

Now imagine the instructor limited you to only 4 clubs (1 of each type) every time you played 18 holes. Sure, it is easier to carry 4 clubs around the course, but every shot now requires more manipulation since you no longer have a specialized tool for each situation. You also have to assume a different type of body shape and motion (a new model). To cope, you must learn to adapt your limited club selection to a greater array of shots by tweaking variables such as your grip, angle of attack and force of impact. These adjustments are the golfing equivalent of model-agnostic meta-learning.

In the paper Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, Chelsea Finn, Pieter Abbeel and Sergey Levine present an algorithm for achieving meta-learning irrespective of the underlying model architecture. Building on the work of Schmidhuber and others, their research focuses on deep neural networks as the preferred model, while also demonstrating the algorithm's applicability to different model architectures and problem sets, including image classification and regression. In addition, the algorithm introduces no new parameters; it simply fine-tunes the existing ones to achieve strong results. The end result is a model that enables an agent to learn to solve new tasks on its own, with minimal training data.
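The nested structure of the algorithm is easiest to see in code. The sketch below, written in Python with JAX, adapts a small multilayer perceptron to a single task with one inner gradient step, then differentiates through that step to obtain the meta-gradient for the shared initialization. The layer sizes, step sizes and single-task outer update are simplifications of the paper's setup (which averages the outer loss over a batch of tasks and uses the Adam optimizer for the meta-update); treat it as a sketch of the idea, not the authors' implementation.

```python
import jax
import jax.numpy as jnp

def init_params(key, sizes=(1, 40, 40, 1)):
    # A small MLP regressor; the layer sizes here are illustrative.
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (n_in, n_out)) * jnp.sqrt(2.0 / n_in)
        params.append((w, jnp.zeros(n_out)))
    return params

def forward(params, x):
    for w, b in params[:-1]:
        x = jax.nn.relu(x @ w + b)
    w, b = params[-1]
    return x @ w + b

def mse(params, x, y):
    return jnp.mean((forward(params, x) - y) ** 2)

def inner_update(params, x, y, alpha=0.01):
    # Task-specific adaptation: one ordinary gradient step (the "inner loop").
    grads = jax.grad(mse)(params, x, y)
    return [(w - alpha * gw, b - alpha * gb)
            for (w, b), (gw, gb) in zip(params, grads)]

def maml_outer_loss(params, support, query):
    # Adapt on the support set, then measure the loss on the query set.
    # Differentiating this function through the inner step yields the
    # meta-gradient that updates the shared initialization.
    (sx, sy), (qx, qy) = support, query
    adapted = inner_update(params, sx, sy)
    return mse(adapted, qx, qy)

# One illustrative meta-update on a single task (a real run would average
# the outer loss over a batch of tasks drawn from the task distribution).
params = init_params(jax.random.PRNGKey(0))
sx = jnp.linspace(-5.0, 5.0, 10).reshape(-1, 1)
sy = 2.0 * jnp.sin(sx + 0.3)
qx = jnp.linspace(-5.0, 5.0, 10).reshape(-1, 1) + 0.25
qy = 2.0 * jnp.sin(qx + 0.3)

meta_grads = jax.grad(maml_outer_loss)(params, (sx, sy), (qx, qy))
beta = 0.001  # outer-loop step size
params = [(w - beta * gw, b - beta * gb)
          for (w, b), (gw, gb) in zip(params, meta_grads)]
```

Because the only real requirement is that the underlying model be trainable with gradient descent, this outer loop stays the same whether the inner model is the toy regressor above, an image classifier, or the deep networks the paper focuses on. That is what "model-agnostic" means here.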