Bootstrapping your Machine Learning Journey

Source: Deep Learning on Medium


This is the first article of an ongoing series titled Learning Machine Learning. This series is written primarily to inspire aspiring machine learning (ML) practitioners through my personal learning journey. In this opening article, I will give a non-technical, historical introduction to ML and show how one can bootstrap a journey into the field.

The Perfect AI “Model”

Forget XGBoost, random forests, or neural networks. I am referring to Alexandre Robicque, a graduate student in Artificial Intelligence at Stanford University. According to this article by GQ, Alexandre Robicque has a modelling side gig, and in 2017 he was cast in an advertisement for Yves Saint Laurent (YSL).

Source: Screen capture from a YouTube ad (0:07) from YSL

I am sure this makes many ML practitioners (myself included) envious of both his brains and brawn. (Fortunately) I won’t be (mis-)advising further on his modelling exploits, but I will be happy to share some thoughts on learning the craft of machine learning.

To begin on this craft, let us go back to the beginnings of AI.

The Birth of AI

The term Artificial Intelligence was coined by John McCarthy in 1955. However, the honor of writing the first AI program and coining the term machine learning goes to his contemporary Arthur Lee Samuel. The program Arthur Samuel wrote played checkers, and it eventually defeated the checkers master Robert Nealey.

For a chronology of AI, I invite you to this Wikipedia page for a detailed timeline. Alternatively, this article from Forbes provides an excellent historical write-up as well.

A piece of history worth highlighting is the occurrence of two AI winters (1974–1980 and 1987–1993). These AI winters were periods when funding for AI research dried up, largely due to unmet expectations. However, as Deep Learning began to show the potential for significant systemic changes to the economy (from approximately 2011), interest and funding in AI grew once again, leading to today’s modern age of AI.

The Godfathers of AI

The foundations required for Deep Learning were laid by researchers Yoshua Bengio, Geoffrey Hinton, and Yann LeCun. For their contributions, they were awarded the Turing Award in 2018 and are widely regarded as the Godfathers of AI.

Left to right: Yann LeCun, Geoff Hinton, Yoshua Bengio, Andrew Ng. Source: Taken from Andrew Ng’s Facebook page

One Deep Learning technique, the convolutional neural network (CNN), has been very successful in the field of computer vision, being able to recognize traffic signs (2011), handwriting (2011), and objects (the 2012 Cat Detection work by Andrew Ng and the 2012 ImageNet competition).

Another Deep Learning technique known as Deep Reinforcement Learning was used by Google DeepMind’s AlphaGo, defeating Go Champion Lee Sedol in 2016.

Source: Screen capture from a YouTube video (2:43) analysing Lee Sedol’s move

Charting the Growth of Machine Learning

However, interest in Machine Learning and Deep Learning did not take off in 2011. If we compare the search trends of the terms Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL), interest stayed largely flat until 2014.

Figure 1: Google Search Trends between DL, AI and ML (2004 to now)
Figure 2: Google Search Trends between DL, AI and ML (2010 to 2015)

If we zoom into the five years from 2014 to 2019, we notice the strong, steady growth of these search terms. It is also interesting to note that the popularity of the search term Machine Learning overtook that of Artificial Intelligence during this period.

Figure 3: Google Search Trends between DL, AI and ML (2014 to now)

Note that the trends are relative to one another. Hence, a good benchmark is to compare these search terms with other globally trending search terms. I have chosen to compare the search terms Deep Learning and Artificial Intelligence with some of the characters and actors from HBO’s widely acclaimed Game of Thrones series. In particular, I have compared them with the search terms Maisie Williams, who plays Arya Stark, and my favourite character Tyrion Lannister, played by the extremely talented Peter Dinklage (Figure 4).

Although the peak searches for the cast of GOT are much higher than those for Deep Learning or Artificial Intelligence, the comparisons in Figure 4 do show that these search terms are indeed comparable to those of the main cast of the series.

Figure 4: Google Search Trends between DL, AI and GOT Characters (2014 to now)

Returning to Figure 3, it is worth exploring whether any specific events sparked such an uptick of interest in both Machine Learning and Deep Learning. I believe one event that served as a strong catalyst for this surge of interest is the rise of Machine Learning platforms.

The Rise of Machine Learning Platforms

In my previous op-ed, Clearing three misconceptions about AI, I wrote about how the rise of cloud computing and Machine Learning platforms has democratized these technologies.

If we plot the search trends of the term deep learning against those of some popular machine learning packages (in particular sklearn, tensorflow, pytorch, and mxnet), the result does suggest a high correlation (though not causality) between these search terms.

In fact, the correlation coefficient between deep learning and sklearn is 0.952, while that between deep learning and tensorflow is 0.972. Note that the time period used is from 29 November 2015 to 7 April 2019.
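For readers who want to reproduce this kind of comparison, the Pearson correlation coefficient between two trend series can be computed in a couple of lines with numpy. The series below are made-up toy numbers standing in for weekly Google Trends interest scores (the real figures would come from a Google Trends CSV export); only the computation itself is meant to carry over.

```python
import numpy as np

# Toy stand-ins for weekly Google Trends interest scores (0-100).
# Real values would be downloaded as a CSV from trends.google.com.
deep_learning = np.array([10, 20, 30, 40, 50])
tensorflow = np.array([12, 24, 33, 45, 55])

# Pearson correlation coefficient between the two series.
r = np.corrcoef(deep_learning, tensorflow)[0, 1]
print(f"correlation: {r:.3f}")
```

Two series that rise together, like these, produce a coefficient close to 1; remember that this measures co-movement, not causation.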

Figure 5: Google Search Trends between DL, sklearn, tensorflow, pytorch and mxnet (2014 to now)

Geographical Regions Interested in Deep Learning

Figure 6: Various cities that are hotspots in deep learning.

If we drill down to the cities where deep learning is one of the key search terms, two of the top five (Santa Clara and Sunnyvale) come from Silicon Valley. The other three come from Asia: Beijing (China), Daejeon (South Korea), and East District (Taiwan). As the figures are all relative, it is interesting to note that Beijing ranks first in the relative share of Google searches for deep learning. Its relative score (100) exceeds the sum of Santa Clara (35), Sunnyvale (29), and Daejeon (28) combined (a total relative score of 92).

If you are keen to find out more about how China is competing in the AI race, I highly recommend Kai-Fu Lee’s book AI Superpowers. Alternatively, you can watch this YouTube video where he shares some of his views.

How should I begin learning ML?

There are at least two contemporary schools of thought on how one should pick up machine learning. The first approach (described by Jeremy Howard as the bottom-up approach) focuses on foundational mathematics and algorithms first: becoming well acquainted with concepts such as statistics and linear algebra before delving into the concepts of machine learning.

The second (top-down) approach swings to the opposite extreme: it focuses on using some of the machine learning packages mentioned earlier and working through practice problems with them. Only when you hit a wall on how to proceed is it time to look deeper at the mathematics under the hood.

I subscribe to the latter, top-down form of instruction. I consider being a practitioner of machine learning (ML) similar to being able to drive a car. When we start out learning how to drive, we do not bother ourselves with the details of how the engine or gears work. Using ML packages is a lot like that: much of the complexity has already been abstracted away, and we are mainly concerned with learning how to use these tools to get from point A to point B.
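To make the abstraction point concrete, here is a minimal sketch of what "driving the car" looks like with one of the packages mentioned earlier, scikit-learn. The dataset and model choice here are purely illustrative; the point is that training a working classifier takes only a handful of lines, with none of the underlying mathematics exposed.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load a small built-in dataset and split it into train and test sets.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Fit a classifier -- the package hides all the algorithmic details.
model = DecisionTreeClassifier(random_state=42)
model.fit(X_train, y_train)

# Evaluate on held-out data.
accuracy = model.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

None of this requires knowing how a decision tree chooses its splits; that deeper dive comes later, when you hit a wall the package cannot abstract away.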

Similar to driving, you will become proficient in ML with a lot of practice. However, to fully master the craft, you often have to dig deeper into how these packages are written, and that is where you will need to learn the mathematics.

This form of top-down learning holds at least two advantages. Firstly, it is much more interesting and satisfying, as it provides the gratification of solving complex problems from the start. Secondly, it defers learning complex topics until the point you need them, ensuring that you spend your precious time on concepts that are truly essential.

Bootstrapping your Machine Learning Journey

There is a plethora of resources available to bootstrap your machine learning journey.

If you prefer a more structured approach, your local university likely offers courses in executive education. Otherwise, you might take up a Massive Open Online Course (MOOC) on platforms such as Coursera, Udacity, or edX. This link provides a great compilation of some of the leading (and very affordable) online courses.

Finally, you can also opt to jump right into the deep end of the pool by taking part in competitive Machine Learning platforms such as Kaggle. Whenever you find yourself stuck, head to the forums for help. The learning curve will be steep, but it will be worth it.

The author is an adjunct professor at the Singapore Institute of Technology (SIT). He holds a PhD in Computer Science from Imperial College London and a Masters in Computer Science from NUS under the Singapore-MIT Alliance (SMA) programme.

The views in this article are that of the author’s and do not necessarily reflect the official policies or positions of any organizations that the author is associated with. The author also holds no affiliations nor earns any referral fees from any products, courses or books mentioned in this article.