Recreating Plasticity in Deep Neural Networks

Source: Deep Learning on Medium

During this holiday season, I am revisiting some of the most important AI papers of the last few months.

The human brain remains the biggest inspiration for the field of artificial intelligence (AI). Neuroscience-inspired methods regularly attempt to recreate some of the mechanisms of the human brain in AI models. Among those mechanisms, plasticity holds the key to many of the learning processes in the human brain. Can we recreate plasticity in AI agents?

Synaptic plasticity is one of those magical abilities of the human brain that has puzzled neuroscientists for decades. From the neuroscience standpoint, synaptic plasticity refers to the ability of connections between neurons (or synapses) to strengthen or weaken over time based on brain activity. Very often, synaptic plasticity is associated with the famous Hebb’s rule, “Neurons that fire together wire together,” which tries to summarize how the brain forms long-lasting connections through their regular use in different cognitive tasks. Not surprisingly, synaptic plasticity is considered a fundamental building block of capabilities such as long-term learning and memory.

In the AI space, researchers have long tried to build mechanisms that simulate synaptic plasticity to improve the learning of neural networks. Recently, a team from Uber AI Labs published a research paper that proposes a meta-learning method called Differentiable Plasticity, which imitates some of the dynamics of synaptic plasticity to create neural networks that can keep learning from experience after their initial training.

The quest to simulate synaptic plasticity in AI models is nothing new, but it’s relatively novel in the deep learning space. Typically, plasticity mechanisms have been confined to the domain of evolutionary algorithms rather than gradient-based training. The creative idea of Uber’s Differentiable Plasticity is to make plasticity itself trainable by backpropagation, using traditional gradient descent to tune how strongly each connection between neurons in a computation graph can change over its lifetime.

Differentiable Plasticity Explained

The main idea behind Differentiable Plasticity is to assign each connection in a deep neural network an initial weight, as well as a coefficient that determines how plastic the connection is. A connection between any two neurons i and j has both a fixed component and a plastic component. The fixed part is just the traditional connection weight w_{i,j}. The plastic part is stored in a Hebbian trace Hebb_{i,j}, which varies over the network’s lifetime according to ongoing inputs and outputs. The plasticity of the connection is governed by a coefficient α_{i,j}. Thus, at any time, the total effective weight of the connection between neurons i and j is the baseline (fixed) weight w_{i,j} plus the Hebbian trace Hebb_{i,j} multiplied by the plasticity coefficient α_{i,j}.
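In the paper’s notation, the forward pass and the Hebbian update look roughly as follows, where σ is a nonlinearity such as tanh and η is the learning rate of the trace (the trace update shown here is the decaying running-average variant discussed in the paper):

```latex
x_j(t) = \sigma\Big( \sum_{i} \big[\, w_{i,j} + \alpha_{i,j} \,\mathrm{Hebb}_{i,j}(t) \,\big]\, x_i(t-1) \Big)

\mathrm{Hebb}_{i,j}(t+1) = \eta \, x_i(t-1) \, x_j(t) + (1 - \eta) \, \mathrm{Hebb}_{i,j}(t)
```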

As part of the training period, the Differentiable Plasticity model uses gradient descent to tune the structural parameters w_{i,j} and α_{i,j}, which determine how large the fixed and plastic components are. As a result, after this initial training, the agent can keep learning from ongoing experience, because the plastic component of each connection is automatically shaped by neural activity to store information. This mechanism closely resembles the synaptic plasticity processes of the human brain.
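To make the training loop concrete, here is a minimal PyTorch sketch of the idea. It is illustrative only: the layer sizes, the tanh nonlinearity, the toy pattern-echo task, and the episode_data helper are my assumptions, not Uber’s actual implementation (their released code is discussed in the next section).

```python
import torch

n_in, n_out = 16, 16
eta = 0.1  # learning rate of the Hebbian trace (a fixed hyperparameter in this sketch)

# Structural parameters, tuned by gradient descent during the initial training.
w = (0.01 * torch.randn(n_in, n_out)).requires_grad_()      # fixed weights
alpha = (0.01 * torch.randn(n_in, n_out)).requires_grad_()  # plasticity coefficients
optimizer = torch.optim.Adam([w, alpha], lr=1e-3)

def plastic_step(x, hebb):
    # Effective weight = fixed part + plasticity coefficient * Hebbian trace.
    y = torch.tanh(x @ (w + alpha * hebb))
    # Hebbian update: decaying average of pre-/post-synaptic co-activation.
    hebb = (1 - eta) * hebb + eta * torch.outer(x, y)
    return y, hebb

def episode_data(steps=5):
    # Toy task (an assumption): repeatedly echo a transform of a random pattern.
    pattern = torch.randn(n_in)
    for _ in range(steps):
        yield pattern, torch.tanh(pattern[:n_out])

for episode in range(1000):
    hebb = torch.zeros(n_in, n_out)  # plastic state is reset at the start of each "lifetime"
    loss = torch.zeros(())
    for x, target in episode_data():
        y, hebb = plastic_step(x, hebb)
        loss = loss + ((y - target) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()  # gradients flow through the Hebbian updates, hence "differentiable" plasticity
    optimizer.step()
```

After this meta-training, w and alpha are frozen; only the Hebbian trace keeps changing, which is what lets the network continue learning at test time without any further gradient descent.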

Differentiable Plasticity in Action

The Uber research team released an initial implementation of the Differentiable Plasticity model on GitHub. The code includes a series of experiments that evaluate the new meta-learning technique across several well-known scenarios. For instance, in an image reconstruction task, a deep neural network must quickly memorize a set of natural images it has never seen before; one of these images is then shown with half of it erased, and the network must reconstruct the missing half from memory. This task has been notoriously difficult for traditional networks with non-plastic connections (including state-of-the-art recurrent architectures such as LSTMs), but Differentiable Plasticity models seem to cruise through it.

Another test used to evaluate plastic networks was the maze exploration challenge, which is considered one of the standard benchmarks for reinforcement learning algorithms. In that task, an agent must discover, memorize, and repeatedly reach the location of a reward within a maze. During the test, the models using Differentiable Plasticity vastly outperformed traditional reinforcement learning models, navigating the maze far more efficiently than their non-plastic counterparts.