Source: Deep Learning on Medium
The Machine Learning Prospect
This article covers the fundamentals of Machine Learning and promises to give a layman a clear understanding of what MACHINE LEARNING is in just 7 minutes!
This is my first ever blog on Medium, which brings immense joy, a little excitement, and some anxiety about how it will be presented and received. Nonetheless, I have put together a piece that promises a basic, ground-level understanding of what Machine Learning is. So without further ado, let’s begin…
The Machine Learning Trend
In 2006, Geoffrey Hinton and his colleagues published a paper showing how to train a deep neural network capable of recognizing handwritten digits with state-of-the-art precision (>98%). They branded this technique “Deep Learning.” Training a deep neural net was widely considered impossible at the time, and most researchers had abandoned the idea since the 1990s. This paper revived the interest of the scientific community, and before long many new papers demonstrated that Deep Learning was not only possible but capable of mind-blowing achievements that no other Machine Learning (ML) technique could hope to match (with the help of tremendous computing power and great amounts of data). This enthusiasm soon extended to many other areas of Machine Learning.
Fast-forward 10 years and Machine Learning has conquered the industry: it is now at the heart of much of the magic in today’s high-tech products, ranking your web search results, powering your smartphone’s speech recognition, recommending videos, and beating the world champion at the game of Go. Before you know it, it will be driving your car.
What is MACHINE LEARNING?
From a simple layman’s perspective, Machine Learning is the science of programming computers so they can learn from data.
Here is a slightly more general definition from Arthur Samuel back in 1959:
“Machine Learning is the field of study that gives computers the ability to learn without being explicitly programmed.”
Later, in 1997, Tom Mitchell proposed a more engineering-oriented definition:
“A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.”
For example, your spam filter is a Machine Learning program that can learn to flag spam given examples of spam emails (e.g., flagged by users) and examples of regular (nonspam, also called “ham”) emails. The examples that the system uses to learn are called the training set. Each training example is called a training instance (or sample). In this case, the task T is to flag spam for new emails, the experience E is the training data, and the performance measure P needs to be defined; for example, you can use the ratio of correctly classified emails. This particular performance measure is called accuracy, and it is often used in classification tasks.
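The accuracy measure P mentioned above is straightforward to compute: count the correct predictions and divide by the total. Here is a minimal sketch, where the labels and predictions are made-up values for illustration (1 = spam, 0 = ham):

```python
# Hypothetical ground-truth labels and a classifier's predictions.
true_labels = [1, 0, 0, 1, 1, 0, 0, 0]
predictions = [1, 0, 1, 1, 0, 0, 0, 0]

# Accuracy = ratio of correctly classified emails.
correct = sum(t == p for t, p in zip(true_labels, predictions))
accuracy = correct / len(true_labels)
print(f"Accuracy: {accuracy:.2f}")  # 6 of 8 emails classified correctly -> 0.75
```

In practice you would compute this on emails the system has never seen, not on the training set itself.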
The example above is also illustrated in the following image, which should clear up any remaining doubts:
Why Use Machine Learning?
Let us consider the basic differences between the Traditional Approach and the Machine Learning Approach using the following set of figures.
Traditional programming is a manual process: a person (the programmer) creates the program, and without anyone programming the logic, the rules have to be formulated and coded by hand. We have the input data, and the programmer writes a program that applies those rules to the data and runs on a computer to produce the desired output.
In Machine Learning, on the other hand, the input data and the expected output are fed to an algorithm, which creates the program.
Here’s a quick comparison between the Traditional Approach and the Machine Learning Approach:
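The contrast can be sketched in a few lines of code. The examples below are entirely hypothetical: the traditional filter uses a rule the programmer hand-coded, while the “learned” filter derives its rule from tiny made-up spam and ham samples (a crude stand-in for a real training algorithm):

```python
# Traditional approach: the programmer hand-codes the rule.
def traditional_spam_filter(email: str) -> bool:
    return "win money" in email.lower()

# Machine Learning approach (sketched): the rule is derived from data.
# Hypothetical training set; here we simply "learn" which words appear
# in spam but never in ham.
spam_examples = ["win money now", "free money offer"]
ham_examples = ["meeting at noon", "project status update"]

spam_words = set(" ".join(spam_examples).split())
ham_words = set(" ".join(ham_examples).split())
learned_spam_words = spam_words - ham_words  # the crude learned "model"

def learned_spam_filter(email: str) -> bool:
    return any(word in learned_spam_words for word in email.lower().split())

print(traditional_spam_filter("Win money today!"))  # True
print(learned_spam_filter("free money"))            # True
```

If spammers start writing “w1n m0ney”, the hand-coded rule silently breaks, while the learned filter can simply be retrained on the new examples.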
Types Of Machine Learning Systems
There are many different types of Machine Learning systems, which can be classified into broad categories based on:
Let’s look at each of these criteria a bit more closely:
A comparative study is made between the four major categories according to the amount and type of supervision they receive during training.
NOTE: Here are some of the most important and commonly used algorithms in Supervised and Unsupervised Learning.
Main points in Reinforcement Learning:
- Input: The input should be an initial state from which the model will start.
- Output: There are many possible outputs, as there is a variety of solutions to a particular problem.
- Training: The training is based on the input; the model returns a state, and the user decides whether to reward or punish the model based on its output.
- The model keeps learning continuously.
- The best solution is decided based on the maximum reward.
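The points above can be sketched with a minimal reward-learning loop. This is a toy two-armed bandit, a deliberately simplified instance of reinforcement learning; the reward probabilities and the exploration rate are made-up values for illustration:

```python
import random

random.seed(0)

# Hidden from the agent: how often each of the two actions pays off.
true_reward_probs = [0.3, 0.8]
estimates = [0.0, 0.0]   # the agent's learned value of each action
counts = [0, 0]

for step in range(1000):
    # Explore occasionally; otherwise exploit the best estimate so far.
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: estimates[a])
    # The environment rewards (1) or punishes (0) the chosen action.
    reward = 1 if random.random() < true_reward_probs[action] else 0
    # The model keeps learning: update the running average reward.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

# The best solution is decided by the maximum estimated reward.
best = max(range(2), key=lambda a: estimates[a])
print(f"Best action: {best}, estimated reward: {estimates[best]:.2f}")
```

After enough trials the agent settles on action 1, whose true payoff rate (0.8) is the higher of the two.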
Batch (Offline) and Online Training
Another criterion that is used to classify Machine Learning systems is whether or not the system can learn incrementally from a stream of incoming data.
Comparisons between Batch/Offline and Online Training
In the simplest sense, offline learning is an approach that ingests all the data at one time to build a model whereas online learning is an approach that ingests data one observation at a time.
Offline learning, also known as batch learning, is akin to batch gradient descent. Online learning, on the other hand, is the analog of stochastic gradient descent.
Online learning is data-efficient and adaptable: once a piece of data has been consumed, it is no longer required, which technically means you don’t have to store your data.
Online learning algorithms can also be used to train systems on huge datasets that cannot fit in one machine’s main memory. The algorithm loads part of the data, runs a training step on that part, and repeats the process until it has run on all of the data.
This whole process is usually done offline (i.e., not on the live system), so online learning can be a confusing name. Think of it as incremental learning.
Instance-Based VS Model-Based Learning
Machine Learning systems can also be categorized by how they generalize, that is, by how they make predictions on examples they have never seen before.
In instance-based learning there are normally no parameters to tune; the system is normally hard-coded with priors in the form of fixed weights or algorithms such as tree-search-based methods. Such a system normally does what is known as lazy learning: it absorbs the training data instances and uses those instances directly for inference.
Model-based learning can be seen as the opposite of instance-based learning. In model-based learning, there are parameters to tune. These parameters, with optimal settings, are supposed to model the problem as accurately as possible; thus, learning is not simply about memorization but rather about searching for those optimal parameters.
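The two styles can be contrasted on a tiny made-up 1-D dataset. The nearest-neighbor predictor below memorizes the instances and fits nothing, while the model-based predictor fits a single parameter (a decision threshold, computed here as a crude midpoint for illustration) and could then discard the training data:

```python
# Hypothetical training set: (feature value, class label).
points = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]

# Instance-based (lazy) learning: store the instances; at query time,
# predict the class of the nearest stored example. No parameters fitted.
def nearest_neighbor(x):
    return min(points, key=lambda p: abs(p[0] - x))[1]

# Model-based learning: fit one parameter (a threshold between the
# classes), then predict with the model alone.
threshold = (max(f for f, c in points if c == 0) +
             min(f for f, c in points if c == 1)) / 2  # midpoint = 5.0

def model_predict(x):
    return 1 if x > threshold else 0

print(nearest_neighbor(3.0), model_predict(3.0))  # both predict class 0
print(nearest_neighbor(7.0), model_predict(7.0))  # both predict class 1
```

Note the trade-off: the instance-based predictor must keep all the points around forever, while the model-based predictor only needs its learned threshold.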
Main HINDRANCES Of Machine Learning
There are some challenges that can make a learning algorithm go immensely wrong, including a “bad algorithm” and “bad data.” So let’s take a quick look before summing it up:
As I sum things up, I can say that Artificial Intelligence (AI) is a very broad field in computer science whose goal is to build an entity that can perceive and reason about the world as well as, or better than, humans can.
Machine Learning is the largest subfield in AI and tries to move away from this explicit programming of machines. Instead of hard-coding all of our computer’s actions, we provide our computers with many examples of what we want, and the computer will learn what to do when we give it new examples it has never seen before.
All that being said, machine learning and artificial intelligence will continue to revolutionize the industry and will only become more prevalent in the coming years, and I recommend you utilize them to their fullest extent.
So that’s it for now. I hope this has given you a basic understanding of what Machine Learning is and how it has been widely adopted and utilized over the years in different ways.
Any doubts and queries are always welcome in the responses section, and I will be there to clarify and answer them wholeheartedly. You can also connect with me on LinkedIn. If you find this article helpful and informative, please hit the clap button to encourage me to bring more articles in the fields of Data Science, Artificial Intelligence, and Machine Learning.
I will definitely catch you all in my next blog; until then, I leave you with a quote from one of my inspirations in this field:
“If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.”
— Andrew Ng