Multi-Layer Perceptron (MLP)

Source: Deep Learning on Medium

How does it learn?

Learning occurs in the perceptron by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and is carried out through backpropagation, a generalization of the least mean squares algorithm in the linear perceptron.

We can represent the degree of error in an output node j for the nth data point (training example) by

e_{j}(n) = d_{j}(n) - y_{j}(n)

where d is the target value and y is the value produced by the perceptron. The node weights can then be adjusted based on corrections that minimize the error in the entire output, given by

\mathcal{E}(n) = \frac{1}{2}\sum_{j} e_{j}^{2}(n)
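To make these two quantities concrete, here is a minimal numerical sketch in NumPy; the target and output vectors are illustrative values chosen for this example, not anything from the article.

```python
import numpy as np

d = np.array([1.0, 0.0, 0.0])   # target values d_j(n) for three output nodes (illustrative)
y = np.array([0.7, 0.2, 0.1])   # values y_j(n) actually produced by the perceptron (illustrative)

e = d - y                        # per-node error e_j(n) = d_j(n) - y_j(n)
E = 0.5 * np.sum(e ** 2)         # error over the entire output, (1/2) * sum_j e_j(n)^2

print(e)   # [ 0.3 -0.2 -0.1]
print(E)   # ~0.07 (up to floating-point rounding)
```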
Using gradient descent, the change in each weight w_{ji} is

\Delta w_{ji}(n) = -\eta \frac{\partial \mathcal{E}(n)}{\partial v_{j}(n)} y_{i}(n)
where y_{i} is the output of neuron i in the previous layer and \eta is the learning rate, which is chosen so that the weights converge quickly to a response, without oscillations.
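As a rough illustration of this update rule, the snippet below applies the correction to the incoming weights of a single node j; the learning rate, the value of the local gradient -\partial\mathcal{E}/\partial v_{j}, and the previous-layer outputs are all assumed, illustrative numbers.

```python
import numpy as np

eta = 0.1                            # learning rate \eta (assumed)
y_prev = np.array([0.5, 0.3, 0.9])   # outputs y_i of the neurons feeding into node j (assumed)
local_grad = 0.06                    # assumed value of -dE(n)/dv_j(n) for node j

delta_w = eta * local_grad * y_prev  # \Delta w_{ji}(n) = eta * (-dE/dv_j) * y_i
w_j = np.array([0.2, -0.4, 0.1])     # current incoming weights of node j (illustrative)
w_j = w_j + delta_w                  # apply the correction
print(delta_w)                       # ~[0.003, 0.0018, 0.0054]
```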

The derivative to be calculated depends on the induced local field v_{j}, which itself varies. It is easy to prove that for an output node this derivative can be simplified to

-\frac{\partial \mathcal{E}(n)}{\partial v_{j}(n)} = e_{j}(n)\,\phi^\prime(v_{j}(n))
where \phi^\prime is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is

-\frac{\partial \mathcal{E}(n)}{\partial v_{j}(n)} = \phi^\prime(v_{j}(n)) \sum_{k} \left(-\frac{\partial \mathcal{E}(n)}{\partial v_{k}(n)}\right) w_{kj}(n)
This depends on the corresponding derivatives for the kth nodes, which belong to the output layer. So to update the weights of a hidden node, the error terms of the output layer must be computed first and then carried backwards through the weights w_{kj} and the derivative of the activation function. It is this backward flow of the error, layer by layer, that gives backpropagation its name.
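Putting the pieces together, here is a self-contained sketch of one forward and one backward pass for a network with a single hidden layer and a logistic (sigmoid) activation; the layer sizes, the sample input and target, and the learning rate are illustrative assumptions, and biases are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(v):             # activation function (sigmoid, assumed for this sketch)
    return 1.0 / (1.0 + np.exp(-v))

def phi_prime(v):       # its derivative \phi'(v)
    s = phi(v)
    return s * (1.0 - s)

# one training example: input x and target d (illustrative values)
x = np.array([0.0, 1.0])
d = np.array([1.0])
eta = 0.5

W1 = rng.normal(size=(3, 2))   # hidden-layer weights
W2 = rng.normal(size=(1, 3))   # output-layer weights

# forward pass: induced local fields v and outputs y at each layer
v1 = W1 @ x
y1 = phi(v1)
v2 = W2 @ y1
y2 = phi(v2)

# backward pass
e = d - y2                                  # e_j(n) = d_j(n) - y_j(n)
delta2 = e * phi_prime(v2)                  # output nodes: -dE/dv_j = e_j * phi'(v_j)
delta1 = phi_prime(v1) * (W2.T @ delta2)    # hidden nodes: phi'(v_j) * sum_k delta_k * w_kj

# gradient-descent corrections: \Delta w_{ji} = eta * delta_j * y_i
W2 += eta * np.outer(delta2, y1)
W1 += eta * np.outer(delta1, x)
```

Note how delta1 for the hidden layer is obtained from delta2 through W2.T: the output-layer error terms are pushed back through the weights w_{kj}, which is exactly the backward flow described above.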