Source: Deep Learning on Medium
First of all, thanks to the deeplearning.ai team, and especially Andrew Ng, for releasing this awesome Specialisation on Deep Learning, which covers the basics of almost every important sub-topic of Deep Learning.
I started this Specialisation last winter, when only four courses had been released, and completed all of them in a week. The last course (Sequence Models) was released at the end of January, when my college term had started, so I was only able to take it this summer. It has really interesting assignments covering all the different types of Sequence Models discussed in the lectures. Overall, it was an amazing experience to work through all the courses and their assignments.
Initially the courses will feel really easy if you have some background in Deep Learning, Neural Networks and Machine Learning projects. If you want, you can jump straight to the assignments; if any problem arises, you always have the option to come back and watch the videos to clear your doubts. I did this for the first three courses of the Specialisation.
Here I briefly discuss the courses and what you can learn from their content. Later, I will also share my notes on all five courses here.
The Specialisation consists of five courses:
- Neural Networks and Deep Learning
- Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
- Structuring Machine Learning Projects
- Convolutional Neural Networks
- Sequence Models
Neural Networks and Deep Learning
This is the first course of the Specialisation, and it is easier for those who already have good knowledge of Neural Networks, Machine Learning and the basics of Deep Learning. In this course he covers the following:
- Week 1: This is a very short week in which Andrew Ng gives an introduction to Deep Learning and supervised learning with Neural Networks.
- Week 2: In this week you will learn about binary classification, Logistic Regression, the cost function, Gradient Descent, computation graphs, vectorization in Python, a tour of iPython/Jupyter Notebooks, and Python basics with numpy. There is one final programming assignment on Logistic Regression with a Neural Network.
- Week 3: In this week you will learn about Neural Network representation, linear and non-linear activation functions, the forward pass, and backpropagation (the backward pass). There is one assignment on planar data classification with a hidden layer.
- Week 4: In this week you finally put together all the blocks learned in the previous weeks to make a Deep Learning model. You will learn more about the forward and backward passes, and the Neural Network is generalized to more than one layer. This week has two assignments: one on building a basic Deep Learning model with the forward and backward passes implemented, and a second on applying a Deep Neural Network to classify cat vs. non-cat images.
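To give a feel for what the Week 2 assignment builds, here is a minimal NumPy sketch of logistic regression trained with gradient descent. It is my own illustration, not the assignment's code, and the variable names and toy data are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_step(w, b, X, Y, lr=0.1):
    """One gradient-descent step for logistic regression.
    X: (n_features, m) examples, Y: (1, m) labels in {0, 1}."""
    m = X.shape[1]
    A = sigmoid(w.T @ X + b)        # forward pass: predicted probabilities
    dZ = A - Y                      # gradient of the cost w.r.t. z
    dw = (X @ dZ.T) / m             # gradient w.r.t. weights
    db = float(np.sum(dZ)) / m      # gradient w.r.t. bias
    return w - lr * dw, b - lr * db

# Tiny toy problem: two 2-D points, one per class
X = np.array([[1.0, -1.0], [1.0, -1.0]])
Y = np.array([[1.0, 0.0]])
w, b = np.zeros((2, 1)), 0.0
for _ in range(500):
    w, b = logistic_step(w, b, X, Y)
preds = sigmoid(w.T @ X + b)        # confident on this separable toy data
```

The vectorized form (matrix products instead of loops over examples) is exactly the point of the vectorization lectures in this week.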
Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
- Week 1: In this week you will learn how to set up your machine learning application by understanding bias/variance and doing a train/dev/test split of the data. You will learn about regularization and adding dropout to a layer, then about gradient problems such as vanishing and exploding gradients, and finally about gradient checking. There are three assignments: one on parameter initialization, a second on regularization, and a third on gradient checking.
- Week 2: In this week you will learn about mini-batch gradient descent, exponentially weighted averages, gradient descent with momentum, RMSprop, Adam optimization, learning rate decay, and the problem of local optima. There is one assignment on optimization, from which you will understand the intuition behind Adam and RMSprop, the importance of mini-batch gradient descent, and the effect of momentum on the overall performance of your model.
- Week 3: In this week you will cover hyperparameter tuning, Batch Normalization, multi-class classification, and an introduction to DL frameworks like TensorFlow. There is one hands-on assignment with TensorFlow.
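The Week 2 optimizers can be condensed into one update rule: Adam combines momentum (an exponentially weighted average of gradients) with RMSprop-style scaling, plus bias correction. This is a generic sketch of one Adam update as taught in the lectures, not the course's assignment code; the toy minimization at the end is my own:

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: momentum (m) plus RMSprop-style scaling (v),
    with bias correction for the early steps (t starts at 1)."""
    m = beta1 * m + (1 - beta1) * grad          # first moment: momentum
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment: RMSprop
    m_hat = m / (1 - beta1 ** t)                # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Minimize f(x) = x^2 starting from x = 1; the gradient is 2x
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.01)
```

The same function works unchanged on NumPy arrays of weights, which is why Adam is a near-default choice in the rest of the Specialisation.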
Structuring Machine Learning Projects
- Week 1: In this week Andrew Ng talks about the strategy to follow for ML projects. You will learn how data should be distributed into train/dev/test sets, and he also compares model accuracy to human-level accuracy and talks about avoidable bias.
- Week 2: In this week you will learn how to do error analysis and clean up incorrectly labelled data. Then you will learn what you can do when the train and test sets come from different (mismatched) distributions, and how bias and variance behave under mismatched distributions. After that you will learn about Transfer Learning, multi-task learning and, finally, end-to-end Deep Learning.
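One practical takeaway from Week 1 is that with large datasets the classic 60/20/20 split gives way to much smaller dev/test fractions, since a few thousand examples are enough to compare models. A small sketch of such a split, with the 1% fractions as an illustrative assumption:

```python
import random

def split_data(examples, dev_frac=0.01, test_frac=0.01, seed=0):
    """Shuffle once, then carve out dev and test sets.
    For big data, small dev/test fractions (e.g. ~1% each) are
    often preferred over the classic 60/20/20 split."""
    shuffled = examples[:]
    random.Random(seed).shuffle(shuffled)   # fixed seed for reproducibility
    n = len(shuffled)
    n_dev, n_test = int(n * dev_frac), int(n * test_frac)
    dev = shuffled[:n_dev]
    test = shuffled[n_dev:n_dev + n_test]
    train = shuffled[n_dev + n_test:]
    return train, dev, test

train, dev, test = split_data(list(range(100_000)))
```

The course's key requirement, which this sketch does not enforce, is that the dev and test sets come from the same distribution as the data you expect at deployment time.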
Convolutional Neural Networks
This is the fourth course of the Specialisation, and it is a little more interesting than the first three. In this course he covers the following:
- Week 1: In this week you will learn about edge detection in images. He discusses how an image can be convolved with a kernel, along with pooling, striding and padding, and gives a brief overview of Computer Vision. There are two assignments: one on building a basic convolution model, and another on building and training a ConvNet in TensorFlow for a classification problem.
- Week 2: In this week you will learn about classic convolutional neural networks used for image classification, such as LeNet-5, AlexNet, VGGNet and ResNet. After that he discusses the Inception network, and then how these networks can be applied to other datasets using Transfer Learning. He also talks about data augmentation, which may be needed to create more data. At the end of this week there are two assignments: one on the Happy House, whose main intent is to familiarize you with Keras on TensorFlow, and another on Residual Networks, from which you will learn how to train a state-of-the-art neural network for image classification.
- Week 3: In this week you will learn about object localization, landmark detection, the sliding-window approach, bounding box prediction, IoU calculation, non-max suppression, anchor boxes, YOLO for object detection, and region proposal algorithms in brief. There is only one assignment this week, in which you build a model for car detection in images using YOLO.
- Week 4: In this week you will learn how to build models for Face Verification and Face Recognition, the Siamese network, and how the triplet loss can be implemented, which is an effective loss function for training a neural network to learn an encoding of a face image. You will also learn about Neural Style Transfer with its content and style cost functions, and more about Transfer Learning. There are two assignments: one on Face Recognition for the Happy House, and a second on generating novel artistic images using the neural style transfer algorithm.
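Of the Week 3 building blocks, IoU (Intersection over Union) is the easiest to show in a few lines: it scores how well a predicted bounding box overlaps a ground-truth box, and it also drives non-max suppression. A plain-Python sketch, assuming the corner-coordinate box convention:

```python
def iou(box1, box2):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2),
    where (x1, y1) is the upper-left and (x2, y2) the lower-right corner."""
    # Corners of the intersection rectangle
    xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    # max(0, ...) handles boxes that do not overlap at all
    inter = max(0, xi2 - xi1) * max(0, yi2 - yi1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    union = area1 + area2 - inter
    return inter / union

score = iou((0, 0, 2, 2), (1, 1, 3, 3))   # two 2x2 boxes overlapping in a 1x1 square
```

In non-max suppression, boxes whose IoU with a higher-confidence box exceeds a threshold (commonly around 0.5) are discarded as duplicates.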
Sequence Models
This is the best course in the Specialisation, in which you will gain more in-depth knowledge of how neural networks can be designed.
- Week 1: In this week you will learn about Recurrent Neural Networks, backpropagation in RNNs, different types of RNNs (many-to-many, many-to-one, one-to-one, one-to-many), language models, sequence generation, vanishing gradients in RNNs, GRUs (Gated Recurrent Units), LSTMs (Long Short-Term Memory), bidirectional RNNs and deep RNNs. There are three assignments: the first on building an RNN model step by step; in the second you build a character-level text-generation recurrent neural network and find out why gradient clipping is required; and in the third you apply an LSTM to music generation and generate your own jazz music with deep learning.
- Week 2: In this week you will learn about word embeddings, their properties, and where they can be used. You will be able to convert words to vectors using different algorithms like word2vec and GloVe word vectors, and you will also learn about negative sampling. So this week mostly covers Natural Language Processing (NLP). At the end of this week you will be able to debias word embeddings. There are two assignments: one on using word embeddings to solve word-analogy problems, and a second in which you implement a model that takes a sentence as input and finds the most appropriate emoji.
- Week 3: In the last week of the last course you will learn about various sequence-to-sequence architectures, picking the most likely word in text generation through beam search, then refinements and error analysis on beam search, and then the attention mechanism. After that there is a brief description of how audio data can be fed into a seq2seq model, applied to trigger word detection. There are two assignments: the first on Neural Machine Translation with an attention mechanism, and the second on trigger word detection.
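The Week 1 step-by-step assignment centers on one forward step of a basic RNN cell, which takes only a few lines in NumPy. This is my own sketch of that idea, assuming the course's usual notation (Waa, Wax, Wya, ba, by) and made-up toy shapes:

```python
import numpy as np

def rnn_cell_forward(x_t, a_prev, Wax, Waa, Wya, ba, by):
    """One time step of a basic RNN:
    a<t> = tanh(Waa @ a<t-1> + Wax @ x<t> + ba)
    y<t> = softmax(Wya @ a<t> + by)"""
    a_next = np.tanh(Waa @ a_prev + Wax @ x_t + ba)
    z = Wya @ a_next + by
    e = np.exp(z - z.max())          # numerically stabilized softmax
    y_t = e / e.sum()
    return a_next, y_t

# Toy shapes: 3-dim input, 5-dim hidden state, 2 output classes
rng = np.random.default_rng(0)
x_t = rng.standard_normal((3, 1))
a_prev = np.zeros((5, 1))
Wax = rng.standard_normal((5, 3))
Waa = rng.standard_normal((5, 5))
Wya = rng.standard_normal((2, 5))
ba, by = np.zeros((5, 1)), np.zeros((2, 1))
a_next, y_t = rnn_cell_forward(x_t, a_prev, Wax, Waa, Wya, ba, by)
```

Unrolling this cell over the time steps of a sequence, feeding each `a_next` back in as `a_prev`, gives the full RNN forward pass built in the assignment.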
After completing this Specialisation, I am now fully confident that I can do Deep Learning and Machine Learning projects on my own, because it not only gave me information about different tools (Keras, TensorFlow, Pandas and NumPy) and architectures, but also gave me an idea of how real-world AI works and how a Machine Learning pipeline can be created: one that converts noisy data into useful data through data cleaning and feature engineering, and then builds a model that provides AI solutions to real-world problems.
I will update this article and share the link to my notes as soon as possible. Till then, enjoy, and thanks for reading.