# Notes on Deep Learning — Logistic Regression

Source: Deep Learning on Medium

This is the sixth part of a 34-part series, ‘notes on deep learning’. Please find links to all parts in the first article.

### Logistic regression

At its core, PyTorch provides two main features:

• An n-dimensional Tensor, similar to a NumPy array but able to run on GPUs
• Automatic differentiation for building and training neural networks


The last post was a heavy one; I know you are looking for more exciting stuff to play with, and I assure you it is coming. This post, though, is a light one…

Concepts:

a) Sigmoid
The sigmoid squashes its input into a value between zero and one.

If we have to mark one difference between the last notebook on linear regression and this notebook on logistic regression, it is this:
1. The sigmoid has the capability to give an output between 0 and 1.
2. The output of the linear layer is fed into the sigmoid, and we use the combination of linear layer and sigmoid as our forward pass.
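As a sketch of that forward pass in plain Python (kept torch-free so it is self-contained; the weights, bias, and input here are made-up illustrative values):

```python
import math

def sigmoid(z):
    # Squashes any real number into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, weights, bias):
    # Linear layer: weighted sum of the inputs plus a bias term.
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    # Sigmoid turns the linear output into a probability-like value.
    return sigmoid(z)

p = forward([1.0, 2.0], weights=[0.5, -0.25], bias=0.1)
# p lies strictly between 0 and 1 (about 0.525 for these values)
```

In PyTorch this pairing is typically expressed as an `nn.Linear` layer followed by a sigmoid activation.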

The sigmoid unit, when used as a binary stochastic neuron, treats its output as the probability of producing a spike in a short time window.

To put it simply in words:
the sigmoid has ‘e’ raised to the negative of its input in the denominator, σ(x) = 1 / (1 + e^(−x)), which maps any real number to a value between 0 and 1.
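A quick numeric sketch of that formula, showing how it approaches (but never reaches) 0 and 1 at the extremes:

```python
import math

def sigmoid(x):
    # sigma(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(-10))  # very close to 0
print(sigmoid(0))    # exactly 0.5
print(sigmoid(10))   # very close to 1
```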

If you want the gist in more detail, please take a look at the related technical guide: Learning PyTorch with Examples.

So what’s going on above?

• The model starts with a chosen learning rate (a hyper-parameter) and randomly initialized weights and biases.
• PyTorch's autograd calculates the gradients of the weights and biases automatically, instead of us deriving them by hand as we did earlier.
• On every iteration the weights and biases are updated using those gradients.
• Why? So we end up with optimized weights and biases, i.e. a PyTorch model that gives us minimum loss and accurate predictions. This is well explained in the earlier posts of this series.
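The loop described above can be sketched by hand in plain Python (PyTorch's autograd would compute these gradients for us; the toy data, learning rate, and iteration count here are made up for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy 1-D data: inputs below zero labelled 0, above zero labelled 1.
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]

w, b = 0.0, 0.0   # initial weight and bias
lr = 0.5          # learning rate (a hyper-parameter we choose)

for _ in range(1000):                 # every iteration updates w and b
    for x, y in zip(xs, ys):
        p = sigmoid(w * x + b)        # forward pass: linear layer + sigmoid
        # For binary cross-entropy loss, the gradients w.r.t. w and b
        # simplify to (p - y) * x and (p - y).
        w -= lr * (p - y) * x
        b -= lr * (p - y)

# After training, the model separates the two toy classes:
p_pos = sigmoid(w * 2.0 + b)   # should be near 1
p_neg = sigmoid(w * -2.0 + b)  # should be near 0
```

In real PyTorch code, `loss.backward()` and an optimizer's `step()` replace the hand-written gradient lines.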