History of Neural Networks — Part 1

In its most general form, a neural network is a machine that is designed to model the way in which the brain performs a particular task or function of interest.

Video: Neural Network In 5 Minutes | What Is A Neural Network – https://www.youtube.com/channel/UCsvqVGtbbyHaMoevxPAq9Fg

Neural networks resemble the brain in two respects:

  1. Knowledge is acquired by the network from its environment through a learning process.
  2. Inter-neuron connection strengths, known as synaptic weights, are used to store the acquired knowledge.

The human nervous system may be viewed as a three-stage system: receptors, the neural net (brain), and effectors.

The brain, represented by the neural net, is central to the system. It continually receives information, perceives it, and makes appropriate decisions.

The receptors convert stimuli from the human body or the external environment into electrical impulses that convey information to the brain.

The effectors convert electrical impulses generated by the brain into distinct responses as system output.

Biological Neuron

Source — Wikipedia

Neurons are the structural constituents of the brain. It is estimated that there are approximately 100 billion neurons in the human brain, and on the order of 100 trillion synapses or connections. To achieve good performance, neural networks employ a massive interconnection of neurons. Synapses, or nerve endings, are the elementary structural and functional units that mediate the interactions between neurons.

Computational Model of an Artificial Neuron

An artificial neuron is an information-processing unit that is fundamental to the operation of a neural network. It contains three basic elements, sketched in code after this list:

  1. A set of synapses, or connecting links, each of which is characterized by a weight or strength of its own.
  2. An adder for summing the input signals.
  3. An activation function for limiting the amplitude of the output of a neuron.
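As a rough illustration, these three elements might be combined as in the following minimal Python sketch (the function name, the example values, and the sigmoid activation are illustrative assumptions, not part of any specific historical model):

    import math

    def neuron(inputs, weights, bias, activation):
        # 1. Synapses: each input signal is scaled by the weight of its link.
        # 2. Adder: the weighted inputs, plus a bias, are summed into the
        #    "induced local field" (net input) of the neuron.
        net = sum(w * x for w, x in zip(weights, inputs)) + bias
        # 3. Activation function: limits the amplitude of the output.
        return activation(net)

    # Example call with a sigmoid activation (values are arbitrary):
    y = neuron([0.5, -1.0], [0.8, 0.2], bias=0.1,
               activation=lambda u: 1.0 / (1.0 + math.exp(-u)))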

Types of Activation Functions

Activation functions are mathematical functions that determine the output of a neuron in a neural network. The function is attached to each neuron in the network and determines whether that neuron should be activated (“fired”) or not, based on whether the neuron’s input is relevant for the model’s prediction.

Threshold (Step) function

If the input value is above a certain threshold, the neuron is activated and sends exactly the same fixed signal (1) to the next layer; otherwise it outputs 0, no matter how far the input is from the threshold.

Sigmoid function

Historically the most common activation function. It squashes its input into the range (0, 1) and brings non-linearity to the neural network.

Signum function

A hard-limiting function that outputs +1 when the input is positive and -1 when it is negative; the bipolar counterpart of the threshold function. (The linear activation function, f(u) = u, simply passes the input through unchanged; it appears below in the ADALINE model.)
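For concreteness, the three functions might be written as follows (a minimal sketch; placing the threshold at 0 is an assumption made for illustration):

    import math

    def threshold(u):
        # Step function: fires (outputs 1) once the input reaches the
        # threshold (assumed here to be 0), and outputs 0 otherwise.
        return 1 if u >= 0 else 0

    def sigmoid(u):
        # Smooth and non-linear; squashes any input into the range (0, 1).
        return 1.0 / (1.0 + math.exp(-u))

    def signum(u):
        # Hard-limiting, bipolar: +1 for positive input, -1 for negative.
        return 1 if u > 0 else (-1 if u < 0 else 0)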

History of Neural Networks

McCulloch-Pitts Neuron Model

Warren McCulloch
Walter Pitts

In 1943 Warren McCulloch and Walter Pitts created a computational model for neural networks based on mathematics and algorithms called threshold logic. (“A logical calculus of the ideas immanent in nervous activity” in the Bulletin of Mathematical Biophysics 5:115–133.)

If the activation function of the neuron model is the threshold function, the model is called the McCulloch-Pitts model. In this model, the output of a neuron takes on the value of 1 if the induced local field (net) of that neuron is nonnegative, and 0 otherwise.
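For example, with suitably chosen weights and threshold (the values below are one illustrative choice, not taken from the original paper), a McCulloch-Pitts neuron can compute a logic gate such as AND:

    def mcp_neuron(x1, x2, w1=1, w2=1, bias=-2):
        # Induced local field (net input) of the neuron.
        net = w1 * x1 + w2 * x2 + bias
        # Output is 1 if the induced local field is nonnegative, 0 otherwise.
        return 1 if net >= 0 else 0

    # With these weights the field is nonnegative only when both binary
    # inputs are 1, so the neuron computes logical AND:
    assert [mcp_neuron(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 0, 0, 1]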

Perceptron Model

Frank Rosenblatt

The perceptron algorithm was invented in 1958 at the Cornell Aeronautical Laboratory by Frank Rosenblatt. It was the first algorithmically described neural network. Rosenblatt’s perceptron is built around the McCulloch–Pitts model of a neuron.

Perceptron

Basically, it consists of a single neuron with adjustable synaptic weights and bias.

The perceptron is an algorithm for learning a binary classifier: a function that maps its input x (a real-valued vector) to an output value y = f(x) (a single binary value).

Perceptrons can differentiate patterns only if they are linearly separable. Rosenblatt proved that if the patterns (vectors) used to train the perceptron are drawn from two linearly separable classes, then the perceptron algorithm converges and positions the decision surface in the form of a hyperplane between the two classes. The proof of convergence of the algorithm is known as the perceptron convergence theorem.
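In code, the learning procedure might look like the following minimal sketch (the learning rate, epoch count, and 0/1 label encoding are assumptions made for illustration):

    def train_perceptron(samples, labels, lr=0.1, epochs=100):
        # Initialize the adjustable synaptic weights and bias to zero.
        w = [0.0] * len(samples[0])
        b = 0.0
        for _ in range(epochs):
            for x, y in zip(samples, labels):  # y is 0 or 1
                # Predict with a threshold activation on the weighted sum.
                pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
                # Update only when the example is misclassified; on linearly
                # separable data this converges (perceptron convergence theorem).
                error = y - pred
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error
        return w, b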

ADALINE Model

Bernard Widrow

The Adaptive Linear Neuron (ADALINE), introduced by Bernard Widrow and Ted Hoff (1960), is an implementation of an adaptive filter.

ADALINE networks are similar to the perceptron, but their transfer (activation) function is linear (f(u) = u) rather than hard-limiting (i.e., the signum function).

Ted Hoff

An adaptive filter is a system with a linear filter that has a transfer function controlled by variable parameters and a means to adjust those parameters according to an optimization (adaptive) algorithm.
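A minimal sketch of this idea, using the Widrow-Hoff least-mean-squares (LMS) update to adapt the weights (the learning rate and loop structure are illustrative assumptions):

    def train_adaline(samples, targets, lr=0.01, epochs=100):
        w = [0.0] * len(samples[0])
        b = 0.0
        for _ in range(epochs):
            for x, d in zip(samples, targets):
                # Linear transfer function: the output is the net input itself.
                u = sum(wi * xi for wi, xi in zip(w, x)) + b
                # Unlike the perceptron, the error is measured on this linear
                # output (before any hard-limiting) and drives the LMS update.
                error = d - u
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error
        return w, b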

This is the first post in a series on Neural Networks that I will be publishing over the next two weeks. So, stay tuned.