THE BRAIN AND THE MODEL

Terminology and Concepts that Relate Deep Learning to the Brain

One of the things that made Deep Learning frustrating for me was the many new and strange terms associated with it. Over time, with the help of some lectures and study, a few analogies formed in my head that helped me relate Deep Learning to the brain.

DEEP LEARNING MODEL

A model is basically an equation that takes in input and produces a result (output). Models try to predict real-life occurrences. Deep Learning models are just like any other models.

I guess we can relate the human brain to a model.

Different brains perform different functions. For example, Isaac Newton's brain was good at producing intuitions from observation, a traffic warden's brain is good at controlling traffic, and so on.

general deep learning model

Permit me to refer to the deep learning model as the DL-brain.
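To make the idea concrete, here is a minimal sketch of a model as an equation. The function name and the weight values are my own invention, purely for illustration:

```python
# A hypothetical "DL-brain": an equation that takes an input and
# produces an output. The values of w and b here are made up.

def dl_brain(x, w=2.0, b=1.0):
    """A minimal model: y = w * x + b."""
    return w * x + b

print(dl_brain(3.0))  # for input 3.0 the model "predicts" 7.0
```

Everything that follows is about how the model ends up with good values for `w`.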

NEURAL NETWORKS

The idea of a network is just an interaction of different nodes; think of the Facebook network, an electrical network, or the internet.

As you must have figured out, the term Biological Neural Network has to do with the unique structure and workings of the brain.

Since Deep Learning is trying to create models that perform just like the brain, it is fair enough to borrow the word neural; so we have neural networks for deep learning too.

Network activity involves transferring information from node to node through neurons (biological or artificial), and the way these nodes are connected is called the network's architecture.

Neural Structure of the Human Brain and DL-Brain
What happens at the node

DATA-SET

This is basically a large chunk of data we want the DL-brain to understand (learn).

You will agree with me that a newborn baby's brain is unlearned and will only perform its predefined functions like blinking, crying and smiling, until the baby is exposed to the different kinds of information (data-sets) in the world.

Similarly, our DL-brain has to be exposed to different information (data-sets). This process of exposing the DL-brain (model) to different data-sets is called training; yeah, just like training a child in school or at home.
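In code, a data-set is often just pairs of inputs and the answers we want the DL-brain to learn. The values below are invented for illustration:

```python
# A toy "data-set": inputs (features) paired with the answers (labels)
# we want the DL-brain to learn. All values here are made up.
features = [[5.0, 1.2], [4.8, 1.1], [1.0, 0.3], [1.2, 0.4]]
labels   = ["dog", "dog", "cat", "cat"]

# Training means exposing the model to these pairs, example by example.
for x, y in zip(features, labels):
    print(x, "->", y)
```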

Learn! Learn! … Data! Data! lots of Data!

But how does the training take place in our DL brain?

WEIGHTS

At the first ever sight of a dog (say a Doberman), the human brain gets some information about what a dog looks like, but might fail to identify a Caucasian Shepherd as a dog when it sees one later.

If the brain gets access to other varieties of dogs, it figures out the different features that make a dog a dog. This information is usually stored somewhere in the brain (memory); and I guess this memory is called the weight in our DL-brain.

Remember our DL-brain formula (model)? The variable 'W' is actually the weight. It is that one value that holds the precious information that lets the DL-brain label images correctly, identify sounds and recognize speech when it comes across them.

In other words, the weights hold specific details (information) about specific features in the data-sets they came across.

when the DL-brain remembers an image (Image classification)

For the maths guys: it's just the parameter that we tweak in a model during parametric studies.

So, when we say "training a model", we basically mean getting the right values for the weights. Once good weights have been found, we can say that the DL-brain has learnt the data.

But how do these weights in the DL-brain get this information?

OPTIMIZATION

Different humans learn in different ways.

Some ‘learn’ by cramming a chunk of information all at once (not recommended), while some others learn by gradual understanding of the subject (data-set).

Similarly, our DL-brain needs a mechanism that updates the weights with new information it learns about the data-set.

This mechanism that improves human understanding each time we come across a particular subject or object again is called ‘OPTIMIZATION’ in the DL-brain.
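A common optimization mechanism is gradient descent. Here is a sketch on a toy problem (the data and numbers are my own): the model y = w * x starts unlearned, and each update nudges the weight a little closer to the rule hidden in the data:

```python
# Gradient descent updating a single weight w so that the model
# y = w * x matches data generated by the "true" rule y = 3 * x.
xs = [1.0, 2.0, 3.0]
ys = [3.0, 6.0, 9.0]    # answers produced by the true weight, 3.0

w = 0.0                 # the DL-brain starts unlearned
lr = 0.05               # learning rate: the size of each update step
for _ in range(200):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad      # each update improves the weight a little

print(round(w, 3))      # ends up at 3.0: the weight "remembers" the rule
```

Each pass over the data improves the weight, just as each encounter with a subject improves our understanding.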

How we remember stuffs vs How DL-brain remember stuffs

LEARNING RATE

Students know that trying to understand many subjects or topics quickly, all at the same time, usually ends in catastrophe.

Such students tend to get confused, and the whole concept comes crashing down in their heads because of the information that might have been left out.

It's usually better to read books gradually, not too fast and not too slow, until the end (until you have taken in the information in the book).

This is same with our DL-brain;

How quickly should the optimization mechanism update the weights?

For the maths guys: how big a step should the optimization mechanism take each time it updates the weights?

This is all defined by the learning rate of the optimization mechanism; and just like with the human brain, the learning rate (step size) shouldn't be too small or too big (too slow or too fast).

It should be just sufficient for the type of data-set we are considering (the type of subject the student is reading).
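The effect of the step size can be sketched with the same kind of toy problem as before (y = w * x with true weight 3.0; all numbers are my own): a moderate learning rate settles on the answer, while one that is too big makes the weight overshoot further on every step:

```python
# How learning rate affects training on a toy problem: fit y = w * x
# to data generated by the true weight 3.0. Values are illustrative.

def train(lr, steps=50):
    xs = [1.0, 2.0, 3.0]
    w = 0.0
    for _ in range(steps):
        # gradient of mean squared error against targets y = 3 * x
        grad = sum(2 * (w * x - 3 * x) * x for x in xs) / len(xs)
        w -= lr * grad
    return w

print(round(train(0.05), 3))  # moderate step: settles near 3.0
print(train(0.5))             # step too big: the weight blows up
```

The confused student cramming too fast is the diverging run; the steady reader is the converging one.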

ENDING WITH THIS LAST CONCEPT

Just like we have dull people (brains with slow processing speed), we also have dull computers that are only useful for watching movies.

Please, when training on large data-sets, use high-end GPUs or rent compute over the internet for speed.

Google Cloud, Crestle, Colaboratory (Google)

Don’t be dull (Dull Brains)
Be Sharp (Sharp Brains)

CAUTION!

This is just a basic analogy that gives an overview of deep learning.

There are some other concepts associated with deep learning.

Use these resources for further study:

nurture.ai AI-Saturdays (AI6)

Machine Learning by Andrew Ng

Deep Learning by Fast ai

CS231n: Convolutional Neural Networks for Visual Recognition

Special thanks to Nurture.AI, Azeez Oluwafemi, Tejumade Afonja, DevCircleLagos, Vesper.ng, and Intel for the opportunity to learn AI.

Source: Deep Learning on Medium