Building Neural Networks from Scratch in 9 Steps


5. Neural Net Construction


We represent a ‘layer’ as a class in Python. Every layer (except the input layer) has a weight matrix W, a bias vector b, and an activation function. Each layer is appended to a list called neural_net; that list is then a representation of your fully connected neural network.
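The construction code itself is not included in this excerpt. As a minimal sketch of what such a layer class and network list might look like (the class name Layer, the example architecture, and the weight-initialization scheme are assumptions, not the author's exact code):

```python
import numpy as np

class Layer:
    """One fully connected layer: weight matrix W, bias vector b, activation name."""
    def __init__(self, n_in, n_out, activation=None):
        # Small random weights and zero biases; this initialization scheme is an assumption.
        self.W = np.random.randn(n_in, n_out) * 0.1
        self.b = np.zeros((n_out, 1))
        self.activation = activation

# Hypothetical architecture: 2 inputs -> two hidden layers of 8 (ReLU) -> 1 output (sigmoid)
neurons = [2, 8, 8, 1]
activations = ['relu', 'relu', 'sigmoid']

neural_net = []
for l in range(1, len(neurons)):
    neural_net.append(Layer(neurons[l - 1], neurons[l], activations[l - 1]))
```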

Finally, we do a sanity check on the number of trainable parameters using the following formula, and by counting them directly. The number of data points available should exceed the number of parameters; otherwise, the model will almost certainly overfit.

N^l is the number of trainable parameters at the l-th layer; L is the number of layers (excluding the input layer)
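The formula referenced above appears as an image in the original article. For fully connected layers it is presumably of the form below, where n^l denotes the number of neurons in layer l and n^0 the input dimension (a notation not defined elsewhere in this excerpt); the +1 accounts for the bias of each neuron:

```latex
N^{l} = \left( n^{l-1} + 1 \right) n^{l},
\qquad
\text{total number of parameters} = \sum_{l=1}^{L} N^{l}
```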

6. Forward Propagation

We define a function for forward propagation given a certain set of weights and biases. The connection between layers is defined in matrix form as:

σ is the element-wise activation function; the superscript T denotes the transpose of a matrix
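The equation itself is an image in the original article; consistent with the caption above, it is presumably of the form

```latex
a^{l} = \sigma\!\left( \left( W^{l} \right)^{\mathsf{T}} a^{l-1} + b^{l} \right)
```

where a^(l−1) is the activation (column) vector of the previous layer and a^0 = x is the input.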

Activation functions are defined one by one. ReLU is implemented as a → max(a, 0), whereas the sigmoid function should return a → 1/(1 + e^(−a)); its implementation is left as an exercise for the reader.
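As a rough sketch of how the activations and the forward pass could be written, building on the Layer class sketched above (the helper names activation and forward_prop are assumptions; the sigmoid is spelled out here for completeness even though the article leaves it as an exercise):

```python
def relu(a):
    # ReLU: a -> max(a, 0), applied element-wise
    return np.maximum(a, 0)

def sigmoid(a):
    # Sigmoid: a -> 1 / (1 + e^(-a)), applied element-wise
    return 1.0 / (1.0 + np.exp(-a))

def activation(name, a):
    # Dispatch on the activation name stored in each layer (names are an assumption)
    return relu(a) if name == 'relu' else sigmoid(a)

def forward_prop(x, neural_net):
    # x has shape (n_features, n_samples); each column is one sample
    a = x
    for layer in neural_net:
        # a^l = sigma((W^l)^T a^(l-1) + b^l)
        a = activation(layer.activation, layer.W.T @ a + layer.b)
    return a
```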


7. Back-propagation

This is the trickiest part, and the one many of us struggle to understand. Once we have defined a loss metric e for evaluating performance, we would like to know how that loss metric changes when we perturb each weight or bias.

In other words, we want to know how sensitive the loss metric is to each weight and bias.

This sensitivity is captured by the partial derivatives ∂e/∂W (denoted dW in code) and ∂e/∂b (denoted db in code) respectively, and both can be calculated analytically.

⊙ represents element-wise multiplication
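The equations themselves appear as an image in the original article. For a fully connected network with pre-activations z^l = (W^l)^T a^(l−1) + b^l, they take the standard form below (a sketch consistent with the notation above, not necessarily the exact layout of the original figure):

```latex
\begin{aligned}
\delta^{L} &= \frac{\partial e}{\partial a^{L}} \odot \sigma'\!\left( z^{L} \right) \\
\delta^{l} &= \left( W^{l+1} \delta^{l+1} \right) \odot \sigma'\!\left( z^{l} \right) \\
\frac{\partial e}{\partial W^{l}} &= a^{l-1} \left( \delta^{l} \right)^{\mathsf{T}} \\
\frac{\partial e}{\partial b^{l}} &= \delta^{l}
\end{aligned}
```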

These back-propagation equations assume that only a single data point y is compared. The gradient update process would be very noisy, as the performance of each iteration would be subject to that one data point only. Multiple data points can be used to reduce the noise, where ∂W(y_1, y_2, …) is the mean of ∂W(y_1), ∂W(y_2), …, and likewise for ∂b. This averaging is not shown in the equations above, but it is implemented in the code below.
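The code referenced above is not reproduced in this excerpt. A rough sketch of a mini-batch version, building on the forward-pass helpers sketched earlier (and assuming, for concreteness, a squared-error loss; this is not the author's exact implementation), might look like:

```python
def activation_grad(name, z):
    # Derivative of the element-wise activation, evaluated at the pre-activation z
    if name == 'relu':
        return (z > 0).astype(float)
    s = sigmoid(z)
    return s * (1.0 - s)

def back_prop(x, y, neural_net):
    # x: (n_features, m), y: (n_outputs, m); gradients are averaged over the m samples
    m = x.shape[1]

    # Forward pass, caching layer inputs a and pre-activations z
    a, a_cache, z_cache = x, [x], []
    for layer in neural_net:
        z = layer.W.T @ a + layer.b
        a = activation(layer.activation, z)
        z_cache.append(z)
        a_cache.append(a)

    # Backward pass; loss assumed to be e = 0.5 * ||a^L - y||^2, averaged over the batch
    dW = [None] * len(neural_net)
    db = [None] * len(neural_net)
    delta = (a_cache[-1] - y) * activation_grad(neural_net[-1].activation, z_cache[-1])
    for l in reversed(range(len(neural_net))):
        if l < len(neural_net) - 1:
            delta = (neural_net[l + 1].W @ delta) * activation_grad(neural_net[l].activation, z_cache[l])
        dW[l] = a_cache[l] @ delta.T / m             # mean of the per-sample dW's
        db[l] = delta.mean(axis=1, keepdims=True)    # mean of the per-sample db's
    return dW, db
```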