Source: Deep Learning on Medium

# How to Find a Descent Learning Rate using Tensorflow 2

When it comes to building and training Neural Networks, you need to set a massive number of hyper-parameters. Setting those parameters right has a tremendous influence on the success of your net and also on the time you spend heating up the air, aka training your model. One of those parameters that you always have to choose is the so-called learning rate (also known as update rate or step size). For a long time, selecting it right was more trial and error, or a black art. However, there exists a very smart, though simple, technique for finding a decent learning rate, which I guess became very popular through being used in fastai. In this article, I present you a quick summary of that approach and show you an implementation in Tensorflow 2 that is also available through my repo. So let’s get it on.

## The Problem

The learning rate *l* is a single floating point number that determines how far you move in the direction of the negative gradient to update and optimize your network. As already said in the introduction, choosing it correctly tremendously influences the time you spend training your model until you get good results and stop swearing. Why is that so? If you choose it too small, your model will take ages to reach the optimum, as you will just take tiny little baby update steps. If you choose it too large, your model will just bounce around, jumping over the optimum and eventually failing to reach it at all.
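To make this concrete, here is a tiny illustration of my own (not from the article): plain gradient descent on f(w) = w², whose gradient is 2w and whose optimum is w = 0. A too-small rate barely moves, a good one converges, and a too-large one diverges.

```python
def descend(lr, steps=50, w=1.0):
    """Run `steps` gradient descent updates on f(w) = w**2, gradient 2*w."""
    for _ in range(steps):
        w -= lr * 2 * w  # w_new = w - lr * grad
    return w

too_small = descend(0.001)  # crawls: still far from 0 after 50 steps
decent    = descend(0.1)    # converges quickly toward 0
too_large = descend(1.5)    # bounces and diverges: |w| grows every step
```

Here `descend` and the three rates are purely illustrative; real networks behave the same way, just in millions of dimensions.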

## The Solution

Leslie N. Smith presented a very smart and simple approach to systematically find, in a short amount of time, a learning rate that will make you very happy. The prerequisite is that you have a model and a training set that is split into *n* batches.

1. You initialize your learning rate to a small value *l* = *l_min*, with for example *l_min* = 0.00001.
2. You take one batch of your training set and update your model.
3. You calculate the loss and record both the loss and the used learning rate.
4. You exponentially increase the current learning rate.
5. You either **go back to 2** OR **stop** the search if the learning rate has reached a predefined maximum value *l_max* OR the loss increased too much.
6. You take the best learning rate from all tested ones as the one that led to the largest decrease in loss between 2 consecutive trials.
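The whole loop can be sketched in Tensorflow 2 roughly like this. Note this is a hedged sketch, not the article's repo code: `model`, `loss_fn`, and `dataset` are assumed to exist, plain SGD updates stand in for whatever optimizer you use, and the "loss increased too much" threshold of four times the best loss so far is an arbitrary but common heuristic.

```python
import tensorflow as tf

def find_learning_rate(model, loss_fn, dataset,
                       l_min=1e-5, l_max=1.0, n_steps=100):
    """Exponentially sweep the learning rate, one batch per step,
    recording (learning rate, loss) pairs along the way."""
    factor = (l_max / l_min) ** (1.0 / (n_steps - 1))  # growth per step
    lr = l_min
    lrs, losses = [], []
    for x, y in dataset.take(n_steps):
        with tf.GradientTape() as tape:
            loss = loss_fn(y, model(x, training=True))
        grads = tape.gradient(loss, model.trainable_variables)
        # Plain SGD update with the current learning rate
        for v, g in zip(model.trainable_variables, grads):
            v.assign_sub(lr * g)
        lrs.append(lr)
        losses.append(float(loss))
        if losses[-1] > 4.0 * min(losses):  # loss increased too much: stop
            break
        lr *= factor
    # Best learning rate: the one just before the largest decrease in loss
    # between two consecutive trials
    i = min(range(len(losses) - 1), key=lambda i: losses[i + 1] - losses[i])
    return lrs, losses, lrs[i]
```

In practice you would run this on a fresh copy of the model (the sweep trashes the weights) and then pick a rate at or slightly below the returned one.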

To make this all a bit more visual, I show you the smoothed loss plotted over the learning rate on a log scale. The red line marks the computed optimal learning rate.
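Such a plot could be produced roughly as follows (my sketch, assuming `lrs`, `losses`, and the chosen `best_lr` were recorded during the sweep; the exponential moving average with bias correction is one common choice for the smoothing):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line for interactive use
import matplotlib.pyplot as plt

def smooth(values, beta=0.9):
    """Bias-corrected exponential moving average of a loss curve."""
    avg, out = 0.0, []
    for i, v in enumerate(values, start=1):
        avg = beta * avg + (1 - beta) * v
        out.append(avg / (1 - beta ** i))  # correct the zero-initialization bias
    return out

def plot_lr_sweep(lrs, losses, best_lr, path="lr_sweep.png"):
    plt.semilogx(lrs, smooth(losses))  # log-scaled learning-rate axis
    plt.axvline(best_lr, color="red")  # mark the computed optimal learning rate
    plt.xlabel("learning rate")
    plt.ylabel("smoothed loss")
    plt.savefig(path)
```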