Artificial Neural Network — Part 1

This is the first in a series of posts on Artificial Neural Networks. Links to the other posts in this series will appear at the bottom of this post as they are published.

Hello Readers!

Welcome to this intuition-focused blog series on artificial neural networks. In this post and the following ones, we will learn the following things.

First of all, we will look at the neuron, so there will be a little bit of neuroscience: we will find out a bit about how the human brain works and why we are trying to replicate it. We will also see what the main building block of a neural network, the artificial neuron, looks like.
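Before we get there, here is a minimal sketch of that building block, just to make it concrete: an artificial neuron takes some inputs, weighs each one, sums them up, and passes the sum through an activation function. The step activation and all the numbers below are illustrative choices, not something from this series.

```python
# A minimal artificial neuron: weighted sum of inputs, then an activation.
# The weights, bias, and step activation here are illustrative only.

def step(z):
    """A simple threshold activation: fire (1) if the combined signal is strong enough."""
    return 1.0 if z >= 0.0 else 0.0

def neuron(inputs, weights, bias):
    """Weigh each input, sum the signals, add a bias, and apply the activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return step(z)

# One neuron with two inputs: z = 1.0*0.6 + 0.5*(-0.4) - 0.1 = 0.3, so it fires.
print(neuron([1.0, 0.5], [0.6, -0.4], -0.1))
```

We will swap this crude step function for better activation functions in the next post.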

Then, in the next post, we will learn about the activation function. We will look at a couple of examples of activation functions that you could use in your neural networks, find out which of them is the most commonly used in neural networks, and see in which layers you would rather use which functions.

And then we will move on to understanding how neural networks learn.

Later on, we’ll see about gradient descent. This is also part of neural networks learning and we’ll understand how that algorithm is better than just the brute force method that you might be intending or willing to take as a first resort or first method that comes to mind. So we’ll find out how great the advantage of gradient descent are and then we’ll talk about stochastic gradient descent. It’s a continuation of the gradient decent concept but it’s an even better and even stronger method and we’ll find out exactly how it works.

And finally we’ll wrap things up by mentioning the important things about back propagation and summarizing everything in a step by step set of instructions for running your artificial neural networks.

So, let's start!

One of the most hyped concepts making the rounds these days is deep learning. Geoffrey Hinton, the godfather of deep learning, did research on it back in the 80s, and he has published lots and lots of research papers on deep learning since. He now works at Google. So a lot of the things that we are going to be talking about actually come from Geoffrey Hinton, and he has quite a few YouTube videos you can watch.

The idea behind deep learning is to look at the human brain. In these tutorials, what we are trying to do is mimic how the human brain operates. Now, we don't know everything about the human brain, but the little that we do know, we want to mimic and recreate. So let's see how this works.

Here we’ve got some neurons so these neurons which have been smeared onto glass and then have been looked at under a microscope with some coloring. And this is what they look like so they have like a body they have these branches and they have like tails and so and so you can see them they have like a nucleus inside in the middle and that’s basically what a neuron looks like in the human brain.

There’s approximately 100 billion neurons all together so these are individual neurons these are actually motor neurons because they’re bigger they’re easier to see but nevertheless there’s a hundred billion neurons in the human brain and it is connected to as many as about a thousand of its neighbors. So to give you a picture this is what it looks like. And so that’s that’s what we’re going to be trying to recreate. So how do we recreate this in a computer.

Well, we create an artificial structure called an artificial neural network, where we have nodes, or neurons. Some of these neurons hold the input values, the values that you know about a certain situation. For instance, if you are modeling something you want to predict, you need some input to start your predictions from. That is called the input layer.

Then you have the output. That is the value that you want to predict: is somebody going to leave the bank or stay with the bank? Is this a fraudulent transaction or a real transaction? And so on. That is going to be the output layer.

And in between, we are going to have a hidden layer. In your brain, you have so many neurons, and information comes in through your eyes, ears, and nose, i.e. basically your senses. It does not go straight to the output where you have the result; it passes through billions and billions of neurons before it gets to the output. That is the whole concept: since we are going to model the brain, we need these hidden layers sitting before the output. The input-layer neurons connect to the hidden-layer neurons, and those neurons connect to the output.
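That input → hidden → output wiring can be sketched in a few lines of Python. Everything here is illustrative, assuming random weights and a sigmoid activation (which we only cover properly in the next post); it is a sketch of the structure, not a trained network.

```python
import math
import random

def sigmoid(z):
    """Squash a signal into (0, 1); an assumed activation for this sketch."""
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """Each neuron in the layer weighs every input, sums, adds a bias, and activates."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

random.seed(0)
# 3 input values -> 4 hidden neurons -> 1 output neuron (sizes are made up)
hidden_w = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
hidden_b = [0.0] * 4
output_w = [[random.uniform(-1, 1) for _ in range(4)]]
output_b = [0.0]

x = [0.5, -1.2, 3.0]                      # the values we know about a situation
hidden = layer(x, hidden_w, hidden_b)     # input layer feeds the hidden layer
output = layer(hidden, output_w, output_b)  # hidden layer feeds the output
print(output)
```

With a sigmoid on the output neuron, the single output lands between 0 and 1, which is convenient for yes/no questions like "will this customer leave the bank?".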

So where is the deep learning here, and why is it called "deep"? There is nothing deep going on yet. This architecture is an option one might call shallow learning, where there isn't much depth involved.

But why is it called deep learning? Because we then take this to the next level. We separate it even further: instead of just one hidden layer, we have lots and lots of hidden layers, and then we connect everything, interconnecting it all just like in the human brain. The input values are processed through all these hidden layers, just like in the human brain, and then we have an output value. Now we are talking deep learning. So that is what deep learning is all about, on a very abstract level.

Source: Deep Learning on Medium