Introduction to Neural Networks


You might have heard of this cool term “Deep Learning” and wondered what it’s all about. You got hooked seeing its applications, and you learned that it’s called “deep” because of the number of layers, no more, no less. Now you want to learn more about it, so you ask yourself: Where do I start? Which concepts are prerequisites? Then you get overwhelmed seeing this.

Image from Radu Raicea

This may seem unfriendly at first, but as you learn what this is and the concepts underlying it, the overwhelm will fade away and, hopefully, you’ll appreciate the field more and be inspired to create your own ideas or applications around it.

That’s why, in this series of articles, I shall answer those questions and, at the same time, share my notes from Udacity Bertelsmann’s Artificial Intelligence Track. This series covers Lesson 3, which has the same title as this article. What’s new here is that I restructured it the way I understand it. We shall discuss it using a top-down approach: we start with the big picture and navigate through the details as we go.

Now, on to the first question: where to start. The course started with the concepts behind Neural Networks (most of the time, I will shorten this to Neural Nets). The reason is that Neural Nets are the building blocks, an essential puzzle piece, of Deep Learning. To expound on this, we’ll be dedicating this whole series to Neural Nets. The links will be up once I finish the article(s) for each part. This series will be chunked into three main parts, namely:

Part 1: The Perceptron

Big Picture for Part 1: The Perceptron

The image above shows the Big Picture for this part and serves as our guide in navigating through it. We will start with a classification problem involving two classes, a.k.a. Binary Classification, where we will talk about the concepts and the parts of a perceptron. The next chunk is Error Computation, which covers the two types of error computation: Discrete Error Computation and Continuous Error Computation. After Binary Classification, we extend the notion to Multi-Class Classification and discuss which concepts stay the same, which change, and which get added. As the discussion progresses, the blank circles and unannotated details of the Big Picture will get revealed, leaving us with the whole view of what a perceptron is.
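
To make “a perceptron doing binary classification” concrete before we dive in, here is a minimal sketch. It assumes a step activation and hand-picked weights; the weights, bias, and inputs below are illustrative values I made up, not the course’s example:

```python
import numpy as np

def perceptron(x, w, b):
    """A single perceptron: weighted sum of inputs plus a bias,
    passed through a step function to pick one of two classes."""
    linear = np.dot(w, x) + b        # w1*x1 + w2*x2 + ... + b
    return 1 if linear >= 0 else 0   # step activation: class 1 or class 0

# Illustrative values only: two input features, hand-picked weights.
w = np.array([0.6, -0.4])  # one weight per input feature
b = -0.1                   # the bias shifts the decision boundary

print(perceptron(np.array([1.0, 0.5]), w, b))  # -> 1
print(perceptron(np.array([0.0, 1.0]), w, b))  # -> 0
```

The step function makes the output discrete, which connects directly to the Discrete vs. Continuous Error Computation distinction mentioned above.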

Part 2: Neural Network Architecture

Big Picture for Part 2: Neural Network Architecture

This looks familiar, right? It is actually much the same as the first picture above, but the question you might have in mind is: “How is this different from Part 1?” The difference is that in Part 1, the story revolves around a single perceptron.

In this part, we shall dive deep into Multi-Layer Perceptrons, or MLPs. An MLP is also what we call a Neural Network. We shall discuss the parts, the different options we have for each part, and their effects. As in Part 1, by the time we end the discussion, the details in the Big Picture above will get revealed.
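
As a small preview of what “layers” mean in practice, here is a hedged sketch of a tiny MLP forward pass. The 2-3-1 layer sizes, the sigmoid activation, and the random placeholder weights are all assumptions for illustration, not the course’s example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# A tiny 2 -> 3 -> 1 network; the weights are random placeholders.
W1 = rng.normal(size=(3, 2))  # hidden layer: 3 units, 2 inputs each
b1 = np.zeros(3)
W2 = rng.normal(size=(1, 3))  # output layer: 1 unit, 3 hidden inputs
b2 = np.zeros(1)

def forward(x):
    hidden = sigmoid(W1 @ x + b1)     # each hidden unit is perceptron-like
    return sigmoid(W2 @ hidden + b2)  # output unit combines hidden outputs

print(forward(np.array([1.0, 0.5])))  # a probability-like output in (0, 1)
```

Notice that each unit looks like the perceptron from Part 1, except the step function has been swapped for a smooth sigmoid; the choice of activation is one of the “options for the parts” this part will cover.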

Now that we’ve discussed the architecture, let’s go to one of the important algorithms you will encounter: Logistic Regression.

Part 3: Logistic Regression

Big Picture for Part 3: Logistic Regression

This part will make use of the concepts and ideas we learned from the previous parts, quite literally: we go from the definition of each part to the process that uses those parts.

We will discuss this by looking at an overview of the algorithm and then zooming in on the details of the whole process. We identify three sub-processes: Feedforward, Error Computation, and Backpropagation. We will also get in touch with another algorithm inside this whole process, called Gradient Descent, whose math will be discussed as well.
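
To preview how those three sub-processes fit together, here is a minimal sketch of logistic regression trained with gradient descent. The four-point toy dataset, the cross-entropy loss, the learning rate, and the iteration count are all assumptions made for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset: 4 points, 2 features, binary labels (made up for illustration).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)
b = 0.0
lr = 0.5  # learning rate, an assumed hyperparameter

for step in range(1000):
    # 1. Feedforward: predicted probabilities for every point.
    p = sigmoid(X @ w + b)
    # 2. Error Computation: average cross-entropy loss.
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    # 3. Backpropagation: gradients of the loss w.r.t. w and b.
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    # Gradient Descent: step the parameters against the gradient.
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b, loss)  # learned weights, bias, and the final loss
```

The loop is the whole story of Part 3 in miniature: feedforward produces predictions, the error is computed, and gradients flow back to update the parameters.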

Why learn the math?

You can implement things without learning the math, and that is fine, but since the math is one of the core principles operating underneath, knowing it surely helps, especially when debugging or choosing which option to pick for a part. Things will also make more sense as you dig further into the math. Links to the separate articles concerning the math will be posted as well.

Overall,

Learning anything at first looks like a bunch of puzzle pieces scattered on the floor, a colorful mess. Once you see how the finished picture should look, you learn how to look for the parts and see how the pieces fit together. Learning Deep Learning is much the same: it has this big messy part at the start, with all these processes, but you just need to see where each one fits, or which ones do not fit, into the whole puzzle. I hope this article shed some light on, or added to, how you understand Neural Networks.

If you have suggestions or feedback, especially related to the concepts discussed, feel free to drop an email to lowwwiis@gmail.com. I will be very glad to hear them, though replying might take a while! :((