Original article was published by Sarah Sheikh on Deep Learning on Medium
Step 1: Dataset Collection, Preparation and Pre-processing
To begin our experiments, we first need a dataset. It can either be collected directly from hospitals or, when obtaining real hospital data is too difficult or time-consuming, a publicly available dataset from the internet can be used instead. For our study, we created a custom dataset by combining images from three sources: the EyePacs dataset, the APTOS 2019 dataset, and the Messidor 2 dataset. The links to these datasets are given below:
We balanced the images from the three datasets so that each of the five classes contained roughly the same number of images, ensuring the dataset is not biased towards any one class. We used subsets of the EyePacs and APTOS 2019 datasets together with 50% of the Messidor 2 dataset to build this custom training set, choosing only images of good quality that would contribute useful features to the classifiers. The remaining 50% of the Messidor 2 dataset was held out for testing the classifier's performance.
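The balancing step above can be sketched as a simple per-class sampler over the pooled image lists. This is a minimal illustration, not the authors' actual script; the function name, the `(path, grade)` pair format, and the `per_class` cap are assumptions made for the example.

```python
import random
from collections import defaultdict

def balance_classes(labeled_paths, per_class, seed=0):
    """Sample up to `per_class` images from each severity grade so that
    no class dominates the merged dataset.

    labeled_paths: list of (image_path, grade) pairs pooled from the
    source datasets; grade is the DR severity label (0-4).
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for path, grade in labeled_paths:
        by_class[grade].append(path)

    balanced = []
    for grade, paths in sorted(by_class.items()):
        rng.shuffle(paths)  # random subset, not just the first files on disk
        balanced.extend((p, grade) for p in paths[:per_class])
    return balanced
```

In practice a quality filter (discarding blurry or badly exposed images, as described above) would run before this sampling step.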
Later, while building a lightweight, mobile-friendly MobileNetV2 model that gave promising results, we employed image preprocessing techniques to improve the quality of the images. We adjusted the luminous intensity of each image, which altered the brightness and made fine details more visible. We tuned the alpha (gain), beta (bias), and gamma parameters, which together control how much light the processed image appears to receive. For images that were too dark or too bright, we corrected the exposure using alpha = 2.5, beta = 40, and gamma = 1.44; these values were obtained by trial and error on several dim images using OpenCV's convertScaleAbs function. To transform the images for texture analysis with an enhanced signal-to-noise ratio and a better luminance range, we applied OpenCV's bioinspired Retina module to some of the dim images and then tuned certain of its parameters.
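The alpha/beta/gamma adjustment can be sketched without the full OpenCV pipeline: `cv2.convertScaleAbs(img, alpha=a, beta=b)` computes `|a*img + b|` saturated to [0, 255], and gamma correction is a per-pixel lookup table. The numpy version below mirrors that behavior (the bioinspired Retina step, which needs the opencv-contrib package, is omitted); the function name and the choice of `out = 255 * (in/255)^(1/gamma)` for the gamma mapping are assumptions of this sketch.

```python
import numpy as np

def enhance_fundus(img, alpha=2.5, beta=40.0, gamma=1.44):
    """Brighten a dim fundus image (uint8 array).

    Step 1 mirrors cv2.convertScaleAbs: out = clip(alpha*img + beta, 0, 255).
    Step 2 applies gamma correction via a 256-entry lookup table,
    using out = 255 * (in / 255) ** (1 / gamma).
    """
    scaled = np.clip(alpha * img.astype(np.float32) + beta, 0, 255).astype(np.uint8)
    lut = np.clip(((np.arange(256) / 255.0) ** (1.0 / gamma)) * 255.0,
                  0, 255).astype(np.uint8)
    return lut[scaled]  # table lookup applies gamma to every pixel at once
```

With gamma > 1 the mapping lifts midtones, which is why it helps reveal vessel detail in under-exposed retinal images.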
These preprocessing steps helped bring out enriched features from the images. We then applied data augmentation using horizontal and vertical flips and rotations in the range of -20 to +20 degrees, which gave us more than 3,400 images per class and 17,121 images in total in the final dataset. 80% of the images were used for training and 20% for validation. We tested the performance on unseen data: the 50% of the Messidor 2 dataset kept aside specifically for this purpose.
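The augmentation and split described above can be sketched as follows. This is an illustrative outline under stated assumptions: flips are applied with 50% probability each (the article does not specify probabilities), the ±20° rotation is left as a comment since it would need e.g. `cv2.warpAffine`, and the function names are invented for the example.

```python
import random
import numpy as np

def augment(img, rng):
    """Randomly flip an image horizontally and/or vertically.

    A rotation drawn from [-20, +20] degrees would be added here in the
    full pipeline (e.g. with cv2.getRotationMatrix2D + cv2.warpAffine);
    it is omitted to keep this sketch dependency-free.
    """
    out = img
    if rng.random() < 0.5:
        out = np.fliplr(out)  # horizontal flip
    if rng.random() < 0.5:
        out = np.flipud(out)  # vertical flip
    return out

def train_val_split(items, val_frac=0.2, seed=0):
    """Shuffle and split the balanced dataset 80/20 for training and
    validation. The held-out 50% of Messidor 2 stays entirely separate
    as the unseen test set and never passes through this function."""
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    n_val = int(len(items) * val_frac)
    return items[n_val:], items[:n_val]
```

Keeping the Messidor 2 test half out of both augmentation and the 80/20 split is what makes the final evaluation a genuine test on unseen data.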