Recently I wrote about my experience with Udacity's Self-Driving Car Nanodegree (SDCND).
While pursuing this Nanodegree, I was so thrilled by the course material that I decided to enroll in another Nanodegree from Udacity at the end of Term 2 of the SDCND: the Artificial Intelligence Nanodegree. The first two terms of the SDCND had helped me master the basics of Deep Learning, and I wanted to explore applications of Deep Learning in other domains such as Natural Language Processing (think IBM Watson) and Voice User Interfaces (think Alexa). The AI-ND seemed like the perfect place to do this, partly due to my fantastic experience with the previous Udacity NDs.
The Artificial Intelligence ND is a bit different from the other NDs: there are four terms in total, and you need to pay for and complete two of them to graduate. If you wish, you can also enroll in and complete the other terms.
The first term is common and compulsory for all. It teaches the foundations of AI: Game Playing, Search, Optimization, Probabilistic AI, and Hidden Markov Models. The topics are taught by some of the pioneers of AI, such as Prof. Sebastian Thrun, Prof. Peter Norvig, and Prof. Thad Starner. All the topics are covered in detail, with links to research papers and book chapters for further study.
The course begins with an interesting project: creating a program to solve Sudoku puzzles using the concepts of Search and Constraint Propagation. You get an opportunity to play with various heuristics as you try to design an optimal strategy for the game.
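To give a flavor of constraint propagation, here is a minimal sketch of one such strategy, "elimination": whenever a cell is solved, its value is removed from the candidate sets of all its peers (cells sharing a row, column, or 3x3 square). This is my own illustrative reconstruction, not the course's starter code; the board representation (a dict from box names like 'A1' to candidate strings) is an assumption.

```python
# Sketch of the "elimination" constraint-propagation strategy for Sudoku.
# Board representation (an assumption, not the course's exact code):
# a dict mapping box names 'A1'..'I9' to strings of remaining candidates.

rows, cols = "ABCDEFGHI", "123456789"

def cross(a, b):
    return [x + y for x in a for y in b]

boxes = cross(rows, cols)
row_units = [cross(r, cols) for r in rows]
col_units = [cross(rows, c) for c in cols]
square_units = [cross(rs, cs)
                for rs in ("ABC", "DEF", "GHI")
                for cs in ("123", "456", "789")]
unitlist = row_units + col_units + square_units
# Peers of a box: every box that shares a unit with it, excluding itself.
peers = {b: set(sum([u for u in unitlist if b in u], [])) - {b} for b in boxes}

def eliminate(values):
    """Remove each solved box's value from the candidates of its peers."""
    for box, val in values.items():
        if len(val) == 1:              # box is solved
            for peer in peers[box]:
                values[peer] = values[peer].replace(val, "")
    return values
```

Repeatedly applying strategies like this one (together with others such as "only choice") shrinks the search space before any depth-first search is needed.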
The next project builds on this by implementing an adversarial search agent to play the game of Isolation. Topics covered included Minimax, Alpha-Beta pruning, Iterative Deepening, etc. The project also required an analysis of a research paper; I reviewed the famous AlphaGo paper, and my review can be found on my GitHub project page.
From game-playing agents we moved on to the domain of planning problems. I experimented with various automatically generated heuristics, including planning-graph heuristics, to solve the problems. Like the previous project, this one also required a research review.
From planning, we moved to the domain of probabilistic inference. The final project of Term 1 required an understanding of Hidden Markov Models to design a sign-language recognizer. You also get an understanding of different model-selection techniques, such as log-likelihood with cross-validation folds, the Bayesian Information Criterion (BIC), and the Discriminative Information Criterion (DIC).
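Of these, BIC is the easiest to show concretely: it trades model fit against complexity, scoring a model as -2 log L + p log N, where L is the likelihood, p the number of free parameters, and N the number of data points (lower is better). A minimal sketch:

```python
import math

def bic(log_likelihood, n_params, n_points):
    """Bayesian Information Criterion: -2 log L + p log N (lower is better)."""
    return -2.0 * log_likelihood + n_params * math.log(n_points)

# When comparing HMMs with different numbers of hidden states, a model
# with more states (hence more parameters) must raise the log-likelihood
# enough to offset the p * log N complexity penalty.
```

In the recognizer project, a score like this is computed for each candidate number of hidden states per word, and the model with the lowest BIC is kept.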
The next term focused on the concepts and applications of Deep Learning. It covered the basics, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and semi-supervised learning, and then moved on to the latest developments in the field, such as Generative Adversarial Networks (GANs). At the end of the module, there was an option to choose a specialization from three options: Computer Vision, Natural Language Processing, and Voice User Interfaces. Since the SDCND had already exposed me to the domain of computer vision, and I had worked on some NLP projects and gone through Stanford's CS224d to some extent, I decided to pursue the Voice User Interfaces specialization. The project involved building a deep neural network that functions as part of an end-to-end automatic speech recognition (ASR) pipeline: the pipeline accepts raw audio as input and returns a predicted transcription of the spoken language. The network architectures I experimented with were RNN; RNN + TimeDistributed Dense; CNN + RNN + TimeDistributed Dense; Deeper RNN + TimeDistributed Dense; and Bidirectional RNN + TimeDistributed Dense.
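The "TimeDistributed Dense" piece that recurs in those architectures is simpler than it sounds: it applies one shared dense (fully connected) layer independently at every timestep of a sequence, so the ASR model can emit a character distribution per audio frame. A minimal NumPy sketch of the idea (my own illustration, not the project's Keras code; the shapes are assumptions):

```python
import numpy as np

def time_distributed_dense(x, W, b):
    """Apply the same dense layer (W, b) at every timestep of x.

    x: (timesteps, in_dim) sequence of feature vectors
    W: (in_dim, out_dim) shared weight matrix
    b: (out_dim,) shared bias
    """
    # Broadcasting applies identical weights to each timestep independently.
    return x @ W + b
```

In the actual pipeline the RNN produces the per-timestep features, and a softmax over this layer's outputs gives the character probabilities that the decoder turns into a transcription.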
One of the major features of the projects was the research component. To pass any project, you had to give detailed scientific reasoning and empirical evidence for your implementations and programs. This helped me develop critical thinking and efficient problem-solving skills. As with any Nanodegree, this course was full of interactions with people from around the world and from all parts of the industry. It was also heavily project-focused, which kept me excited for the entire six-month duration.
I have continued my learning from this course by following the books "Artificial Intelligence: A Modern Approach" by Stuart Russell and Peter Norvig and "Deep Learning" by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. I still have a long way to go before I master this fascinating field of AI, but the Nanodegree has definitely shown me the way forward.
Source: Deep Learning on Medium