Original article was published by Naveen Kumara on Artificial Intelligence on Medium


From driverless cars to speech recognition, Deep learning is making new things possible. It has become a hot topic in industry as well as academia, and it affects nearly every field related to Machine Learning (ML) and Artificial Intelligence (AI).

# Why you should know about Deep learning

Nearly every industry is going to be affected by AI and ML, and Deep learning plays a big role in that shift.

Whether you work in healthcare or in legal services, chances are you may one day be replaced by a highly autonomous robot.

Deep learning has improved significantly in accuracy over the years and is still evolving. Understanding its nuances will help us all.

> "Intelligence is the ability to adapt to change." **Stephen Hawking**

**Some wide-reaching applications of Deep learning:**

**Self-driving cars:** A self-driving car is the ultimate evolutionary goal of Advanced Driver Assistance Systems (ADAS), developed to the point where there is nobody left to assist.

Visual tasks, including, but not limited to, lane detection, pedestrian detection, and road-sign recognition, are solved with deep learning.

The importance of deep learning for autonomous driving systems can be illustrated by the fact that Nvidia maintains long-term relationships with car manufacturers, working on embedded and real-time operating systems designed exactly for these purposes.

**Humanoids:** In a similar fashion, Deep learning is making interaction between robots and humans simpler day by day.

We already have personal agents like Alexa and Siri, that listen to our queries and answer intelligently.

The great advances in NLP and Image processing enabled by Deep learning are the reason behind such efficient interaction.

Looking at the rate of growth of Robotics and Deep learning, autonomous robots are not that far away. A good example is Sophia, a human-like robot by Hanson Robotics.

**Healthcare:** The adoption of Deep learning in healthcare is on the rise, solving a variety of problems from patient diagnosis to drug discovery.

Research has shown that Deep Neural Networks can be trained to produce radiological findings with high reliability by training from archives of millions of patient scans collected by healthcare systems.

# Implementation of Deep learning

Given that Deep learning is implemented by large Artificial Neural Networks (or simply Neural Networks or NN), let’s find out more about them.

# What’s an Artificial Neural Network

An Artificial Neural Network is a network of interconnected artificial neurons (or nodes), where each neuron represents an information-processing unit.

These interconnected nodes pass information to each other, mimicking the human brain. Each node takes input and performs some operation on it before passing the result forward.

The operation is performed by what is called an Activation function (a non-linearity). It converts the input into an output that can then be used as input for other nodes.

**Artificial Neural Network**

The links between nodes are usually weighted, and these weights are adjusted based on the performance of the network.

If the performance (or accuracy) is high, the weights are left unchanged; if it is low, the weights are adjusted through a specific calculation.
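That specific calculation is typically gradient descent: each weight is nudged in the direction that reduces the error. A minimal single-weight sketch (the input, target, and learning rate here are made-up values for illustration only):

```python
# Gradient-descent sketch: repeatedly nudge one weight to reduce
# the squared error for a single input/target pair.

def train_weight(x, target, w=0.0, lr=0.1, steps=50):
    for _ in range(steps):
        pred = w * x              # simple linear node, no bias
        error = pred - target
        grad = 2 * error * x      # derivative of error**2 w.r.t. w
        w -= lr * grad            # move the weight against the gradient
    return w

w = train_weight(x=2.0, target=6.0)
print(round(w, 3))  # converges toward 3.0, since 3.0 * 2.0 == 6.0
```

Real networks do the same thing for millions of weights at once, with the gradients computed by backpropagation.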

The leftmost layer of neurons is called the input layer and similarly, the rightmost layer is called the output layer. All the other layers in between are called hidden layers.

In a nutshell, an artificial neuron takes input from other nodes, applies the activation function to the weighted sum of its inputs (the transfer function), and then passes the output on.

A threshold term (called a bias) is added to the weighted sum, so the neuron can still produce a non-zero output even when its weighted inputs sum to zero.
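The neuron described above can be sketched in a few lines of Python. The sigmoid activation and the sample weights are illustrative choices, not part of the original text:

```python
import math

def neuron(inputs, weights, bias):
    # Transfer function: weighted sum of inputs plus the bias
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Activation function (non-linearity): here a sigmoid
    return 1.0 / (1.0 + math.exp(-z))

out = neuron([0.5, -1.0], weights=[0.8, 0.2], bias=0.1)
print(round(out, 3))  # ≈ 0.574, i.e. sigmoid(0.3)
```

Note that with zero inputs the bias alone determines the output, which is exactly the role described above.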

To learn more about Neural Networks, check NEURAL NETWORKS by Christos Stergiou and Dimitrios Siganos. They’ve done a good job.

# How are Neural Networks used for Deep learning

For Deep learning, several Neural Network layers (sometimes more than 100) are connected in a feedforward or feedback style to pass information to each other.

Feedforward: This is the simplest type of ANN. Here, the connections do not form a cycle, so the network has no loops.

The input is fed toward the output (in a single direction) through a series of weighted layers. Feedforward networks are extensively used in pattern recognition.

This type of organization is also referred to as bottom-up or top-down.
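A two-layer feedforward pass, following the single-direction rule above, might look like this (the tanh activation and the weight values are arbitrary illustrative choices):

```python
import math

def layer(inputs, weights, biases):
    # One feedforward layer: every output neuron takes a weighted
    # sum of all inputs, adds its bias, then applies tanh.
    return [math.tanh(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

x = [1.0, 0.5]                                          # input layer
hidden = layer(x, [[0.4, -0.6], [0.3, 0.8]], [0.0, -0.1])  # hidden layer
output = layer(hidden, [[0.7, -0.2]], [0.05])              # output layer
print(len(hidden), len(output))  # 2 1
```

Information only ever flows left to right here; there is no path by which an output can feed back into an earlier layer.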

Feedback (or recurrent): The connections in a feedback network can move in both directions: the output of the network is fed back into it to improve performance (forming loops).

These networks can become very complicated but are comparatively more powerful than feedforward networks. Feedback networks are dynamic and are used extensively for a wide range of problems.

Now let’s discuss some specific types of ANN extensively used for DL.

**Most popular ANNs used for Deep Learning**

1) Multilayer Perceptrons: These are the most basic deep Neural Networks, built as feedforward networks. They generally use non-linear activation functions (like tanh or ReLU) and compute the loss through Mean Squared Error (MSE), log loss, or categorical cross-entropy.

The loss is backpropagated to adjust the weights and make the model more accurate.

They are generally used as part of a bigger deep learning network. Read more about Multilayer Perceptrons here: Intro to Multilayer Perceptrons.
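As a rough sketch of a Multilayer Perceptron's forward pass and MSE loss (all weights, biases, and the target are arbitrary illustrative values):

```python
def relu(z):
    # Non-linear activation: pass positives through, clamp negatives to 0
    return max(0.0, z)

def mlp_forward(x, w1, b1, w2, b2):
    # Hidden layer with ReLU activation
    h = [relu(sum(xi * wi for xi, wi in zip(x, row)) + b)
         for row, b in zip(w1, b1)]
    # Linear output layer
    return sum(hi * wi for hi, wi in zip(h, w2)) + b2

def mse(preds, targets):
    # Mean Squared Error over a batch of predictions
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

pred = mlp_forward([1.0, 2.0],
                   w1=[[0.5, -0.3], [0.2, 0.4]], b1=[0.1, 0.0],
                   w2=[0.6, -0.5], b2=0.2)
print(round(mse([pred], [1.0]), 4))
```

In training, the gradient of this MSE with respect to every weight is what gets backpropagated to adjust the network.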

2) Convolutional Neural Networks: Convolutional Neural Networks (ConvNets or CNNs) are similar to ordinary Neural Networks, but their architecture is specifically designed for images or videos as input.

In particular, unlike a regular Neural Network, the layers of a ConvNet have neurons arranged in 3 dimensions: width, height, depth.

They are particularly suitable for spatial data, object recognition, and image analysis, using multidimensional neuron structures.

One of the main reasons for the recent popularity of deep learning is Convolutional Neural Networks. Some of their common uses are self-driving cars, drones, computer vision, and text analytics.

Read more about the dynamics of Convolutional Neural Networks here: Convolutional Neural Networks.
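The core convolution operation can be sketched in pure Python. The tiny image and the hand-set edge-detector kernel below are illustrative; a real CNN learns its kernels from data:

```python
def conv2d(image, kernel):
    # "Valid" 2D convolution (strictly, cross-correlation, as most
    # deep-learning frameworks implement it): slide the kernel over
    # the image and sum the elementwise products at each position.
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A vertical-edge detector on a tiny image: bright left, dark right
img = [[1, 1, 0, 0]] * 4
edge = [[1, -1]] * 2
print(conv2d(img, edge))  # [[0, 2, 0], [0, 2, 0], [0, 2, 0]]
```

The strong response in the middle column is exactly where the bright region meets the dark one, which is why stacks of such learned filters work so well on spatial data.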

3) Recurrent Neural Networks: RNNs extend feedforward networks with recurrent memory loops that feed the output of previous time steps and/or the same layer back in as input (they are trained via backpropagation through time).

Here the connections form a directed graph along a temporal sequence. This gives RNNs the unique capability to model the time dimension and arbitrary sequences of events and inputs.

In simpler terms, at any given instant the network maintains a memory of the sequence up to that moment and can therefore predict the next step.

The most common type of RNN model is the Long Short-Term Memory (LSTM) network. RNNs are used for next-word prediction and grammar learning.
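A single recurrent step, keeping a running hidden state as the "memory" described above, might be sketched like this (the weights are arbitrary illustrative values):

```python
import math

def rnn_step(x, h_prev, wx, wh, b):
    # One recurrent step: the new hidden state mixes the current
    # input with the previous hidden state (the network's "memory").
    return math.tanh(wx * x + wh * h_prev + b)

h = 0.0                            # initial (empty) memory
for x in [1.0, 0.0, 1.0]:          # a tiny input sequence
    h = rnn_step(x, h, wx=0.5, wh=0.9, b=0.0)
print(round(h, 3))  # ≈ 0.693
```

Because each state depends on the previous one, the final value carries a trace of the whole sequence; LSTMs add gating to this loop so that long-range traces survive better.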

This post aimed at providing a brief introduction to the massive field of Deep learning.

I have skipped mathematical details of some of the concepts discussed to facilitate understanding. Thanks for reading!

Stay tuned for more articles.

Please feel free to contact me with any queries related to the above concepts; I am happy to help and to learn from each other: naveen.kumara571423@gmail.com

“Failure and success are both part of our journey. When we are rejected, we work with Exploratory Data Analysis, Feature Engineering, and Feature Selection to choose the right path. Success is not a permanent state either, because we have to keep learning (train and test), fit to the environment (normalisation and standardisation), and optimize (Adam, RMSprop) our model (our journey) for the best accuracy.”