Source: Deep Learning on Medium
WEEK 2: Malaria Parasite Detection
Hi everyone! This week we will explain in detail the works related to our project. You can find the first week here.
We have examined three works related to our project. In the first of these, the Keras deep learning framework was used to construct a convolutional neural network. The model was trained with the Adam optimizer, and it was trained and validated over 25 epochs.
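The training setup of that first work could be sketched in Keras roughly as follows. The post only states "Adam optimizer, 25 epochs", so the layers and input size below are placeholders, not the author's actual architecture:

```python
# Minimal sketch of the first related work's setup. Only "Adam optimizer,
# trained over 25 epochs" comes from the post; the layers and input size
# below are assumptions.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(64, 64, 3)),        # assumed input size
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation="sigmoid"),  # parasitized vs. uninfected
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Dummy stand-in data; the real work used blood-smear cell images.
x = np.random.rand(8, 64, 64, 3).astype("float32")
y = np.random.randint(0, 2, size=(8,))
history = model.fit(x, y, epochs=25, verbose=0)  # trained over 25 epochs
```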
And finally, the confusion matrix of the first related work is as follows:
As the confusion matrix shows, the model's error rate is zero. However, the author has not written anything about overfitting, so we do not know whether the model overfits.
You can find the second related work here:
In this related work, a Bayesian approach and Support Vector Machine (SVM) methods are preferred. After the necessary data corrections and preprocessing, a performance analysis of the feature selection-cum-classification scheme was carried out to select the optimum set of features for achieving the highest accuracy with both learning techniques, as shown in the table below:
With the Bayesian learning based method, accuracy reaches 84% when the 19 most significant features are used.
With the SVM based method, accuracy reaches 83.5% when the 9 most significant features are used.
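The feature selection-cum-classification idea could be sketched with scikit-learn stand-ins: GaussianNB for the Bayesian approach, SVC for the SVM, and SelectKBest choosing the k most significant features (k = 19 and k = 9 in the paper). The data here is synthetic, and the 84% / 83.5% accuracies are the paper's results, not this sketch's:

```python
# Sketch of feature selection followed by classification, assuming
# scikit-learn stand-ins for the paper's methods; data is synthetic.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=25, n_informative=10,
                           random_state=0)

# Bayesian route: keep the 19 most significant features.
bayes = make_pipeline(SelectKBest(f_classif, k=19), GaussianNB()).fit(X, y)
# SVM route: keep the 9 most significant features.
svm = make_pipeline(SelectKBest(f_classif, k=9), SVC()).fit(X, y)

print(bayes.score(X, y), svm.score(X, y))
```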
You can find the last related work here:
In this related work, predictive models were evaluated with five-fold cross validation. Patient-level cross-validation was performed to alleviate model bias and generalization errors. The number of cells for different folds is shown as follows:
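Patient-level cross-validation keeps all cells from one patient in the same fold, so the model is never tested on patients it has already seen in training. A sketch with scikit-learn's GroupKFold (the patient IDs and data below are made up):

```python
# Patient-level five-fold cross-validation via GroupKFold: splitting is
# done by patient ID, not by individual cell image. Data is synthetic.
import numpy as np
from sklearn.model_selection import GroupKFold

X = np.random.rand(20, 4)              # 20 cell images (as feature vectors)
y = np.random.randint(0, 2, 20)        # parasitized / uninfected labels
patients = np.repeat(np.arange(5), 4)  # 5 patients, 4 cells each

splits = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=patients):
    # No patient appears in both the train and the test split.
    assert set(patients[train_idx]).isdisjoint(patients[test_idx])
    splits.append((train_idx, test_idx))
```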
The images in the dataset are resized to specific resolutions to meet the input requirements of customized and pre-trained CNNs and normalized to assist in faster convergence.
A customized, sequential CNN was proposed and used to classify parasitized and uninfected cells. They evaluated the performance of this CNN.
Architecture of the customized model is shown as follows:
The proposed CNN has three convolutional layers and two fully connected layers. The input to the model consists of segmented cells at 100 × 100 × 3 pixel resolution. The convolutional layers use 3 × 3 filters with 2-pixel strides. The first and second convolutional layers have 32 filters each, and the third convolutional layer has 64 filters. The sandwich design of convolutional/rectified linear unit (ReLU) layers and proper weight initialization enhance the learning process. Max-pooling layers with a 2 × 2 pooling window and 2-pixel strides follow the convolutional layers to summarize the outputs of neighboring neuronal groups in the feature maps.
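The architecture described above could be sketched in Keras as follows: three 3 × 3 convolutions with 2-pixel strides (32, 32, 64 filters), each followed by ReLU and a 2 × 2 max-pool with 2-pixel strides, on 100 × 100 × 3 inputs. The `padding="same"` choice and the sizes of the two fully connected layers are assumptions, since the post does not give them:

```python
# Sketch of the customized CNN described in the text. Filter counts,
# strides, pooling, and input size come from the post; padding and the
# fully connected layer widths are assumptions.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(100, 100, 3)),
    keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    keras.layers.MaxPooling2D(pool_size=2, strides=2),
    keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    keras.layers.MaxPooling2D(pool_size=2, strides=2),
    keras.layers.Conv2D(64, 3, strides=2, padding="same", activation="relu"),
    keras.layers.MaxPooling2D(pool_size=2, strides=2),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),    # assumed width
    keras.layers.Dense(1, activation="sigmoid"),  # parasitized vs. uninfected
])
model.summary()
```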
After that, they evaluated the performance of pre-trained CNNs: AlexNet, VGG-16, ResNet-50, Xception and DenseNet-121.
They instantiated the convolutional part of the pre-trained CNNs and trained a fully connected model with dropout on top of the extracted features. In addition, the optimal layer for feature extraction, to aid the downstream classification, was determined experimentally. They evaluated the performance of the pre-trained CNNs in terms of accuracy, sensitivity, specificity, and several other metrics, shown below:
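The feature-extraction setup could be sketched in Keras as follows: the convolutional base of a pre-trained CNN (VGG16 here) feeds a small fully connected head with dropout. `weights=None` keeps this sketch self-contained; the related work would load pre-trained weights (`weights="imagenet"`) and freeze the base. The head's width and dropout rate are assumptions:

```python
# Sketch of transfer learning by feature extraction: a frozen convolutional
# base plus a trainable fully connected head with dropout. weights=None is
# used here to avoid a weight download; in practice use weights="imagenet".
from tensorflow import keras

base = keras.applications.VGG16(weights=None, include_top=False,
                                input_shape=(100, 100, 3))
base.trainable = False  # extract features; train only the head

model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dropout(0.5),                   # assumed rate
    keras.layers.Dense(256, activation="relu"),  # assumed width
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```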
Candidate layers giving the best performance:
Performance metrics achieved with feature extraction from optimal layers:
The customized model came close to an optimal solution thanks to the implicit regularization imposed by hyper-parameter optimization, smaller convolution filter sizes, and aggressive dropout in the fully connected layers.
The images in our dataset differ in size, so they need to be resized during preprocessing; we will resize them all to the same dimensions. We also intend to apply a normalization that improves local brightness and contrast. This should lower our loss.
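The preprocessing we have in mind could be sketched like this: resize every image to one common size, then normalize each image. `tf.image.per_image_standardization` (zero mean, unit variance per image) is one simple normalization used here for illustration; a local contrast method such as CLAHE would be an alternative. The 100 × 100 target size is an assumption:

```python
# Sketch of our planned preprocessing: resize to a common size, then
# per-image standardization as a simple stand-in for brightness/contrast
# normalization. The target size is an assumption.
import numpy as np
import tensorflow as tf

def preprocess(image, size=(100, 100)):
    """Resize to a common size, then standardize the image."""
    image = tf.image.resize(image, size)
    return tf.image.per_image_standardization(image)

# Example on a dummy image of a different size:
raw = np.random.randint(0, 256, size=(120, 90, 3)).astype("float32")
out = preprocess(raw)
print(out.shape)  # (100, 100, 3)
```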
See you next week…