Notes on Deep Learning — Linear regression in PyTorch way

Source: Deep Learning on Medium


This is the fifth part of a 34-part series, ‘Notes on Deep Learning’. Please find links to all parts in the first article.


Linear regression

At its core, PyTorch provides two main features:

An n-dimensional Tensor, similar to a NumPy array but able to run on GPUs
Automatic differentiation for building and training neural networks
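The two features above can be sketched in a few lines. This is a minimal illustration, not from the original post; the tensor shapes and values are arbitrary:

```python
import torch

# Feature 1: an n-dimensional Tensor, with optional GPU placement
x = torch.ones(2, 3)
if torch.cuda.is_available():
    x = x.to("cuda")

# Feature 2: automatic differentiation — track operations on a tensor,
# then let autograd compute the gradient for us
w = torch.tensor(3.0, requires_grad=True)
y = w * w          # y = w^2
y.backward()       # autograd computes dy/dw = 2w
print(w.grad)      # tensor(6.)
```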

Related technical guides on supervised learning

  1. Conventional guide to Supervised learning with scikit-learn — Ordinary Least Squares Generalized Linear Models
  2. Conventional guide to Supervised learning with scikit-learn — Perceptron- Generalized Linear Models

Concepts:

a) Model Class

Previously we defined our own weights (much like a perceptron), and we also wrote our own loss and forward functions.

That’s tedious….
If we keep coding everything and writing every function as the need arises, over time we will end up rewriting TensorFlow/PyTorch itself…
That is not our motive, I believe :P

Writing numeric optimization libraries is always a great exercise, but building on top of prewritten libraries (like PyTorch) to get things done is what generates business value.

So, if we are convinced by that, let’s use the nn package of PyTorch. Here we first create a single layer; we can stack a sequence of layers in the same fashion, or stack the computational graph mentioned in previous posts.

We use linear layer:

Each Linear Module computes output from input using a linear function, and holds internal Tensors for its weight and bias.

Also note there are several other standard modules which we can adopt.
Let’s use a Model Class format which has:
a) __init__: defines the linear module
b) forward: replaces our custom-built forward pass, using the model layer defined in __init__.
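A minimal sketch of such a Model Class, assuming one input feature and one output feature (the class name and sizes are illustrative, not from the original post):

```python
import torch
import torch.nn as nn

class LinearRegressionModel(nn.Module):
    def __init__(self):
        super().__init__()
        # One input feature, one output feature. The Linear module
        # creates and holds its own weight and bias tensors.
        self.linear = nn.Linear(1, 1)

    def forward(self, x):
        # The forward pass simply applies the linear layer,
        # replacing the forward function we wrote by hand earlier.
        return self.linear(x)

model = LinearRegressionModel()
x = torch.tensor([[1.0], [2.0]])
y_pred = model(x)  # shape (2, 1)
```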

b) Optimizer

Since we are using the nn package and its modules, note that these modules come with several hyper-parameters, usually initialized with predefined or random defaults. This shows that PyTorch is user friendly, yet still flexible enough to let us configure its underlying behaviour.
The optim package in PyTorch abstracts the idea of an optimization algorithm and provides implementations of commonly used optimization algorithms.
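Creating an optimizer from the optim package is a one-liner. Here is a sketch using stochastic gradient descent; the learning rate of 0.01 is an illustrative choice, not a value from the post:

```python
import torch
import torch.nn as nn

model = nn.Linear(1, 1)

# SGD is one of several algorithms available in torch.optim
# (others include Adam and RMSprop). lr is a hyper-parameter we choose.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```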

c) Criterion

Remember the loss function we wrote earlier? The criterion is our loss function, which we can now take from the torch.nn module as well.
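For linear regression the usual criterion is mean squared error, available as nn.MSELoss. A minimal sketch with made-up numbers:

```python
import torch
import torch.nn as nn

# Mean squared error: the standard criterion for linear regression
criterion = nn.MSELoss()

y_pred = torch.tensor([2.0, 4.0])
y_true = torch.tensor([1.0, 3.0])
loss = criterion(y_pred, y_true)  # mean of (1^2, 1^2) = 1.0
```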

If you want to get the gist in detail, please take a look at the related technical guide:
LEARNING PYTORCH WITH EXAMPLES


So what’s going on below?

  • The machine starts with a learning rate we choose, and randomly initialized weights and biases.
  • PyTorch computes the gradients of the weights and biases automatically, instead of us deriving them by hand as we did earlier.
  • On every iteration the weights and biases are updated.
  • Why? So we end up with optimized weights and biases, i.e. a PyTorch model that gives us minimum loss and accurate predictions. This is well explained in the earlier posts.
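The steps above can be put together into a training loop. This is a sketch, assuming the toy relationship y = 2x and illustrative choices of learning rate and epoch count:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # for reproducibility of the random init

# Toy data following y = 2x (values are illustrative)
x = torch.tensor([[1.0], [2.0], [3.0]])
y = torch.tensor([[2.0], [4.0], [6.0]])

model = nn.Linear(1, 1)                              # random initial weight and bias
criterion = nn.MSELoss()                             # loss function
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(500):
    y_pred = model(x)                # forward pass through the linear layer
    loss = criterion(y_pred, y)      # how far off are we?
    optimizer.zero_grad()            # clear gradients from the last step
    loss.backward()                  # autograd computes gradients
    optimizer.step()                 # update weight and bias

# After training, the learned weight should be close to 2
# and the bias close to 0.
```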

About the Author

I am venali sonone, a data scientist by profession and also a management graduate.



Motivation

This series is inspired by failures.
Whether you talk about a short 5 years or 50 years, the latter indeed requires something challenging enough to keep the spark in your eyes.