Original article was published by Celus.io on Deep Learning on Medium
Human brain as role model: Deep Learning
While nowadays almost everyone has some idea of Artificial Intelligence (AI), few can picture what Deep Learning actually means. The term "learning" is strongly associated with humans, the most intelligent species on earth. Consciousness takes place in the brain, and among animals the brain-to-body ratio is often taken as a rough measure of intelligence. In nature, then, the brain is the key to intelligence. Since evolution produces remarkably effective adaptations, the next step seems logical: trying to rebuild the brain artificially. These efforts resulted in state-of-the-art algorithms that use nature as a role model: Deep Learning. Now, challenges can be tackled that were unimaginable before learning algorithms. What humans do intuitively can be almost impossible for a computer. For instance, recognizing faces is a simple task for most of us, no matter whether half of the face is in shadow or someone wears a different colored lipstick. But how should a computer know it is the same face when it does not look the same as in the given data? Right, it can't. Well, it couldn't, until Machine Learning was invented and the concept of Deep Learning proved to be a true breakthrough.
What is Deep Learning?
Deep Learning is a part of Machine Learning, which itself is a part of Artificial Intelligence (AI).
There is basically no common definition of AI, and it is sometimes hard to decide whether something can be called AI at all. The word "intelligence" is not as intuitive as one might assume. One popular example of how vague the term is is the Turing Test. Its thesis: something is considered intelligent because of its behavior. Consequently, if one cannot tell whether one is interacting with a computer or a human, the computer can be called intelligent. For example, when you cannot tell whether your chat partner is a chat bot or a human, the bot counts as intelligent. Of course, the bot has no awareness of what the topic is about and has no opinion of its own. This shows the conflict inherent in the word "intelligence". Every outcome is only possible through smart programming and training data, and the more data, the better. Even the best machine learning code in the world is worth nothing without data to learn from.
What makes Deep Learning special is its use of nature as a role model. Put in one sentence: Deep Learning methods enable a machine to mimic the human brain through artificial neurons, and can therefore identify important features on their own. Most methods try to artificially rebuild the way the human brain processes information using neurons. A single neuron is not capable of any intelligent performance, but it can both process and briefly store data, two functions that have traditionally been separate parts of a computer: working memory (RAM) and persistent storage.
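The behavior of one artificial neuron described above can be sketched in a few lines of Python: it weights its inputs, adds a bias, and squashes the sum through an activation function. The sigmoid activation used here is one common choice; real networks use various activations, so treat this as an illustrative toy, not a definitive implementation.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of the inputs plus a bias,
    squashed through a sigmoid activation into the range (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# With all weights and the bias at zero, the weighted sum is 0
# and the sigmoid of 0 is exactly 0.5.
print(neuron([1.0, 2.0], [0.0, 0.0], 0.0))  # 0.5
```

The weights play the role of synapse strengths: a large positive weight means that input strongly excites the neuron, a negative weight means it inhibits it.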
For an artificial neuron, the same terms are used as for its natural role model. It consists of dendrites, a cell body, an axon, and synapses. The dendrites transmit information to the cell body and the axon; the synapses transmit it to the next neuron, and so forth. A single artificial neuron cannot do anything intelligent on its own, just as in the human brain. Instead, these neurons are connected to form layers, and the layers are connected to each other. The result is a complex network: a deep neural network. Of course, it is not nearly as complex as a human brain, despite using nature as a role model.
Each layer of a neural network has a special purpose, meaning it detects and analyzes a certain kind of information in the input. One leading discipline of Deep Learning is image processing. There, the layers within the neural network take on distinct tasks: one layer detects whether pixels are light or dark, one layer recognizes simple shapes, one layer identifies objects, and another layer recognizes specific characteristics of those objects.
Each neural network needs an input and an output layer, and it can additionally contain hidden layers. The layers can use different learning methods. The top layers are trained through supervised learning, meaning humans initially provide labeled information. The bottom, or hidden, layers, however, are mostly trained unsupervised, meaning they learn from the data on their own. Using image processing as an example, the top layers identify a picture's structure through light and dark pixels, while the bottom layers recognize patterns and repetitive appearances. This combination already produced new results in artificial learning, but the true milestone was adding the backpropagation algorithm. With backpropagation, the layers do not simply pass information straight through and produce an output; they exchange information back and forth, eliminating errors. To achieve this, an input is first fed into the neural network. Second, the outcome of the output layer is rated, and third, this rating is fed backwards through the network. Connections that contributed to errors are weighted less, so the algorithm improves by learning what is wrong. This dramatically increases the learning rate. Still, huge amounts of data are needed for Machine Learning to recognize certain features: a network must be well pre-trained to function correctly, otherwise many mistakes will appear, simply because the network has never seen a particular variation before and cannot recognize it.
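The backpropagation idea of rating the output and feeding the error backwards to adjust weights can be illustrated with the simplest possible case: a single sigmoid neuron trained by gradient descent to reproduce the logical OR function. The learning rate and epoch count are arbitrary toy values; real networks apply the same chain rule across many layers:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training data: the logical OR function.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

w = [0.0, 0.0]  # weights, initially zero
b = 0.0         # bias
lr = 1.0        # learning rate

def loss():
    """Sum of squared errors over the whole data set."""
    return sum((sigmoid(w[0] * x1 + w[1] * x2 + b) - t) ** 2
               for (x1, x2), t in data)

before = loss()
for _ in range(1000):
    for (x1, x2), t in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # Chain rule: d(error)/d(weight) = 2*(y - t) * y*(1 - y) * input.
        # Moving each weight against its gradient reduces the error.
        grad = 2 * (y - t) * y * (1 - y)
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b    -= lr * grad
after = loss()
print(after < before)  # True: the error shrinks as weights are corrected
```

The "weighted less" intuition from the text shows up here directly: each pass nudges the weights that caused the error, so the next prediction is a little closer to the target.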
A daily dose of neural networks
The concept of Deep Learning has truly been a breakthrough. Google, for example, stated that the improvements in its language processing achieved through a learning algorithm would have required ten years of manual programming work. But this is not the only application where we encounter Deep Learning models in our daily lives. Another example is streaming services: Netflix and Spotify, for instance, choose their recommendations based on users' habits using deep neural networks. A further widely used application of Deep Learning is language processing and speech recognition. According to a 2019 study by Statista, speech-to-text is the sixth most used AI-based smartphone application. And not only modern services use artificial learning: even spam filters nowadays work on the basis of Machine Learning.
To go into more detail, let us look at how face detection with Face ID on the iPhone works, as image recognition is a popular application of Machine Learning. As noted above, recognizing a face remains simple for humans even when lighting or makeup changes; for a computer this is hard. Face ID is basically image processing software; however, it relies on a combination of special hardware features. A flood illuminator lights up the face with infrared light, making it easier for the software to detect the face. A dot projector then projects around 30,000 dots onto the face. Now the previously described process of recognizing shapes and colors rolls through the neural network. Afterwards, an infrared camera captures the result, and the recognized features are compared to the ones saved on the phone during the Face ID setup. Then backpropagation starts, and the algorithm learns from small changes to recognize the face even better the next time. The algorithm will therefore still work when the hair color is different or new accessories are worn, because there are enough matching features. Consequently, the more you use it, and the greater the amount of data, the better Face ID works.
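How such a comparison of recognized features against stored ones might work can be sketched with a similarity measure like cosine similarity. Apple does not publish Face ID's actual matching method, so the feature vectors and the acceptance threshold below are invented purely for illustration:

```python
import math

def cosine_similarity(a, b):
    """Compare two feature vectors by the angle between them:
    1.0 means identical direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

enrolled = [0.9, 0.1, 0.4, 0.8]      # features saved at setup (made up)
scan     = [0.85, 0.15, 0.42, 0.78]  # features from a new scan (made up)
THRESHOLD = 0.95                     # hypothetical acceptance threshold

print(cosine_similarity(enrolled, scan) > THRESHOLD)  # True: close enough to match
```

A scheme like this explains why a new haircut or glasses need not break recognition: most feature values still line up, so the overall similarity stays above the threshold.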
Learning in the future
One problem developers face is the limited computing power and memory of small devices such as smartphones. Deep Learning needs huge amounts of data to learn to recognize certain features; this is where we enter big-data territory. Small devices therefore use the internet to access neural networks and exchange data. As mentioned, the progress enabled by Deep Learning is enormous. Hence, not using Deep Learning is not an option, and the devices need access to a Machine Learning algorithm.
Especially for self-driving vehicles, visual processing of the environment is crucial. A vehicle needs to recognize whether obstacles are humans, objects, or signs, and must interpret them using AI. When the internet connection is interrupted, dangerous situations can arise. Thus, researchers and developers are currently looking for ways to minimize the hardware requirements of artificial neural networks. As Artificial Intelligence, Machine Learning, and Deep Learning have made such large steps in the past years, there is little doubt that a solution will be found soon. In fact, at Northeastern University in Boston, the researcher Yanzhi Wang found a promising approach: an automatically generated algorithm compact enough to run on small devices.
Machine Learning in general is a growing part of our daily lives and will certainly keep improving. However, one should be careful with science-fiction scenarios about a computer's consciousness or dreaming, as Google's DeepDream might suggest. It has nothing in common with human dreaming. But who knows what is yet to come…
If you are still confused about the terms Artificial Intelligence, Machine Learning, and Deep Learning, here you can find an article that may help you.