Using Deep Learning to Detect COVID-19 from X-Ray Images

Original article can be found here (source): Deep Learning on Medium

A new study by Wang et al. shows the promise of using deep learning to screen for COVID-19 in computed tomography (CT) scans, and the method has been recommended as a practical component of the pre-existing diagnosis system. The study used transfer learning with an Inception convolutional neural network (CNN) on 1,119 CT scans. The model's internal and external validation accuracies were 89.5% and 79.3%, respectively. The main goal is for the model to extract the radiological features characteristic of COVID-19.

While that study achieved impressive accuracy, I decided to train and implement a model with a different architecture in the hope of improving on it. I chose chest radiograph (CXR) images over CT scans for two reasons:

  1. CXRs are more accessible to patients than CT scans, especially in rural and isolated areas, so there is also more potential data available.
  2. In the event that radiologists and other medical professionals become incapacitated while containing the virus (e.g., if they fall sick themselves), A.I. systems are essential to continue administering diagnoses.

The main obstacle to using CXRs rather than CT scans as a diagnostic source is the lack of visually verifiable detail produced by COVID-19. Symptoms such as pulmonary nodules that are easily seen in a CT scan are much harder to spot in a CXR. I want to test whether a model with enough layers can still detect those features in lower-quality but more practical images. My model is thus a proof of concept of whether a ResNet CNN can effectively detect COVID-19 using relatively inexpensive CXRs.

COVID-19 lung scan datasets are currently limited, but the best one I have found, and the one I used for this project, is the COVID-19 open-source dataset. It consists of COVID-19 images scraped from publicly available research, as well as lung images of other pneumonia-causing diseases such as SARS, Streptococcus, and Pneumocystis. If you have any suitable scan images the repository can accept, along with their citations and metadata, please contribute to the dataset to improve the AI systems that will rely on it.

I trained my model only on posteroanterior (PA) views of CXRs, which are the most common type of X-ray scan. I used transfer learning on a ResNet-50 CNN (my loss exploded after a few epochs on a ResNet-34), with a total of 339 images for training and validation. All of the implementation was done using fastai and PyTorch.

Class distribution after the random 25% validation split; the test set was fixed at 78 images from the start. As expected, the data is heavily skewed by the lack of COVID-19 images. Image by author.

The data is heavily skewed given the lack of publicly available data at the time of writing: 35 images for COVID-19 and 226 for Non-COVID-19, the latter including both normal and sick lungs. I grouped all the Non-COVID-19 images together because I had only sparse images for each of the other diseases. I then enlarged the set of X-ray scans labelled “Other” with images of healthy lungs from this Kaggle dataset* before randomly splitting the data 75/25: the training set consisted of 196 images, the validation set of 65 images, and the test set of 78 images drawn entirely from the extra dataset. The external dataset was verified not to contain any images repeated from the open-source dataset. All images were resized to 512 x 512 pixels, which in my experience building a different classifier performed better than 1024 x 1024.
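The split above amounts to a random 75/25 partition of the 261 labelled images. The sketch below reproduces it with dummy tensors in place of the real CXRs (shrunk to 64 x 64 purely to keep the stand-in small; the real pipeline resizes to 512 x 512):

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Stand-in for the 261 labelled images: 35 COVID-19 (label 1) and
# 226 "Other" (label 0). Dummy 64x64 tensors replace the real CXRs.
images = torch.randn(261, 3, 64, 64)
labels = torch.cat([torch.ones(35, dtype=torch.long),
                    torch.zeros(226, dtype=torch.long)])
dataset = TensorDataset(images, labels)

# Random 75/25 split, reproducing the 196/65 train/validation counts.
n_valid = int(0.25 * len(dataset))   # 65
n_train = len(dataset) - n_valid     # 196
train_ds, valid_ds = random_split(
    dataset, [n_train, n_valid],
    generator=torch.Generator().manual_seed(42))
```

With a class imbalance this severe, a stratified split (preserving the COVID-19/Other ratio in both subsets) would be a reasonable refinement over a purely random one.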