Source: Deep Learning on Medium
Radiologist-Level Pneumonia Detection on Chest X-Rays
Can you build an algorithm that automatically detects potential pneumonia cases?
Pneumonia is an infection caused by germs getting into the lungs and airways.
In aspiration pneumonia, these germs get into the lungs because a person accidentally breathes something in instead of swallowing it.
Healthy lungs can usually handle the bacteria from these accidents and get rid of as much of it as possible by causing a person to cough.
People who have trouble coughing, are already ill, or who have compromised immune systems are more prone to aspiration pneumonia.
Aspiration pneumonia is most common in older individuals and younger children but can affect anyone.
The signs and symptoms of pneumonia vary from mild to severe, depending on factors such as the type of germ causing the infection, and your age and overall health. Mild signs and symptoms often are similar to those of a cold or flu, but they last longer.
Signs and symptoms of pneumonia may include:
- Chest pain when you breathe or cough
- Confusion or changes in mental awareness (in adults age 65 and older)
- Cough, which may produce phlegm
- Fever, sweating and shaking chills
- Lower than normal body temperature (in adults older than age 65 and people with weak immune systems)
- Nausea, vomiting or diarrhea
- Shortness of breath
Aspiration pneumonia often occurs if a person has a compromised immune system and inhales an object containing a lot of germs.
In many cases, the person will cough automatically, which will expel these unwanted particles and prevent aspiration pneumonia from developing.
People who have an impaired ability to cough may be more at risk of developing an infection from inhaling something, particularly if the object was large or was a source of infectious germs.
Chest Radiographs Basics
In the process of taking the image, an X-ray passes through the body and reaches a detector on the other side. Tissues with sparse material, such as lungs which are full of air, do not absorb the X-rays and appear black in the image. Dense tissues such as bones absorb the X-rays and appear white in the image. In short –
- Black = Air
- White = Bone
- Grey = Tissue or Fluid
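This mapping from tissue density to brightness can be sketched numerically. Denser material attenuates more of the beam (Beer-Lambert law), so less radiation reaches the detector, and the display is inverted so that a weak detector signal appears white. The attenuation numbers below are illustrative placeholders, not clinical values:

```python
import math

# Illustrative total attenuation (mu * thickness) along the beam path for
# each tissue type; these numbers are placeholders, not clinical values.
attenuation = {"air (lung)": 0.0, "soft tissue": 1.0, "bone": 4.0}

I0 = 1.0  # incident beam intensity
for name, mu_t in attenuation.items():
    transmitted = I0 * math.exp(-mu_t)  # Beer-Lambert law
    brightness = 1.0 - transmitted      # display is inverted: less signal = whiter
    print(f"{name:12s} transmitted={transmitted:.3f} brightness={brightness:.3f}")
```

Air transmits the whole beam and renders black (brightness 0), while bone blocks almost everything and renders nearly white, matching the Black/White/Grey rule above.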
The left side of the subject is on the right side of the screen by convention. You can also see the small L marker at the top right corner. In a normal image we see the lungs as black, but other structures project onto them — mainly the rib cage, the main airways, the blood vessels and the heart.
An example chest radiograph looks like this:
Pneumonia usually manifests as an area or areas of increased lung opacity on CXR.
We will use the dataset of the RSNA Pneumonia Detection Challenge from Kaggle. It is a dataset of chest X-rays with annotations that show which parts of the lungs exhibit signs of pneumonia.
In this competition, you’re challenged to build an algorithm to detect a visual signal for pneumonia in medical images. Specifically, your algorithm needs to automatically locate lung opacities on chest radiographs.
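The challenge supplies these locations as per-patient bounding boxes in a training-label CSV (named `stage_2_train_labels.csv` on Kaggle). A minimal sketch of its schema, using an in-memory stand-in for the real file so it runs without downloading anything:

```python
import io
import pandas as pd

# Miniature stand-in for the challenge's stage_2_train_labels.csv;
# the column names match the real schema, the rows are made up.
csv_text = """patientId,x,y,width,height,Target
p001,264.0,152.0,213.0,379.0,1
p001,562.0,152.0,256.0,453.0,1
p002,,,,,0
"""

labels = pd.read_csv(io.StringIO(csv_text))

# Target == 1 rows each carry one opacity bounding box; a single patient
# can have several boxes. Target == 0 rows have empty box columns.
boxes_per_patient = labels[labels["Target"] == 1].groupby("patientId").size()
print(boxes_per_patient)
```

For the real file you would replace `io.StringIO(csv_text)` with the path to the downloaded CSV.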
Here’s the backstory and why solving this problem matters.
Pneumonia accounts for over 15% of all deaths of children under 5 years old internationally. In 2015, 920,000 children under the age of 5 died from the disease. In the United States, pneumonia accounted for over 500,000 visits to emergency departments and over 50,000 deaths in 2015, keeping the ailment on the list of top 10 causes of death in the country.
While common, accurately diagnosing pneumonia is a tall order. It requires review of a chest radiograph (CXR) by highly trained specialists and confirmation through clinical history, vital signs and laboratory exams. Pneumonia usually manifests as an area or areas of increased opacity on CXR. However, the diagnosis of pneumonia on CXR is complicated because of a number of other conditions in the lungs such as fluid overload (pulmonary edema), bleeding, volume loss (atelectasis or collapse), lung cancer, or post-radiation or surgical changes. Outside of the lungs, fluid in the pleural space (pleural effusion) also appears as increased opacity on CXR. When available, comparison of CXRs of the patient taken at different time points and correlation with clinical symptoms and history are helpful in making the diagnosis.
What are Lung Opacities?
Opacity is any area in the chest radiograph that is more white than it should be. If you compare the images of Sample Patient 1 and Sample Patient 2, you can see that the lower boundary of the lungs of Patient 2 is obscured by opacities. In the image of Sample Patient 1 you can see the clear difference between the black lungs and the tissue below them, while in the image of Sample Patient 2 there is just this fuzziness.
Usually the lungs are full of air. When someone has pneumonia, the air in the lungs is replaced by other material — fluids, bacteria, immune system cells, etc. That’s why areas of opacities are areas that are grey but should be more black. When we see them we understand that the lung tissue in that area is probably not healthy.
A Clear and Detailed Definition of Pneumonia Associated Lung Opacities
Why does the chest radiograph change when a person has pneumonia? To answer this question, we first have to ask what pneumonia is.
Pneumonia is a lung infection that can be caused by bacteria, viruses, or fungi. Because of the infection and the body's immune response, the air sacs in the lungs (termed alveoli) fill with fluid instead of air. Pneumonia-associated lung opacities look diffuse on the chest radiograph because the infection and fluid spread along the normal tree of airways in the lung; there is no clear border where the infection stops. That is different from other diseases such as tumors, which are entirely distinct from the normal lung and do not preserve the normal structure of the airways inside it.
The CheXNet algorithm is a 121-layer deep 2D convolutional neural network: a DenseNet, after Huang et al. The DenseNet's dense connections between layers improve gradient flow and parameter efficiency, reducing training time and allowing a deeper, more powerful model. The model accepts a two-dimensional image of 224 × 224 pixels.
To improve trust in CheXNet's output, a Class Activation Mapping (CAM) heatmap was used, after Zhou et al. This allows the human user to "see" which areas of the radiograph most strongly activate the DenseNet for the highest-probability label.
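The idea behind CAM is simple: the heatmap is the channel-wise weighted sum of the last convolutional feature maps, using the final linear layer's weights for the class of interest. A minimal numpy sketch with random stand-in tensors (the 1024 × 7 × 7 shape matches DenseNet-121's final features at a 224 × 224 input):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the network's last convolutional feature maps (C channels
# on a 7x7 grid) and the final linear layer's weights for the class of
# interest; in practice these come from a trained model.
C, H, W = 1024, 7, 7
features = rng.random((C, H, W))
fc_weights = rng.normal(size=C)

# Class Activation Map: weighted sum of the feature maps over channels.
cam = np.tensordot(fc_weights, features, axes=([0], [0]))  # shape (7, 7)

# Normalize to [0, 1] and upsample by nearest-neighbour repetition so the
# map can be overlaid on the 224x224 radiograph.
cam = (cam - cam.min()) / (cam.max() - cam.min())
heatmap = np.kron(cam, np.ones((32, 32)))  # shape (224, 224)
print(heatmap.shape)
```

In practice the upsampled map is blended with the input radiograph (e.g. via a colormap) to highlight the suspicious regions.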
CheXNet is a 121-layer Dense Convolutional Network (DenseNet) (Huang et al., 2016) trained on the ChestX-ray14 dataset. DenseNets improve flow of information and gradients through the network, making the optimization of very deep networks tractable. We replace the final fully connected layer with one that has a single output, after which we apply a sigmoid nonlinearity. The weights of the network are initialized with weights from a model pretrained on ImageNet (Deng et al., 2009). The network is trained end-to-end using Adam with standard parameters (β1 = 0.9 and β2 = 0.999) (Kingma & Ba, 2014). We train the model using minibatches of size 16. We use an initial learning rate of 0.001 that is decayed by a factor of 10 each time the validation loss plateaus after an epoch, and pick the model with the lowest validation loss.
In comparison to CheXNet, other algorithms (Mask R-CNN, U-Net, FCN, …) that involve instance/semantic segmentation are very slow and require more GPU resources, extensive parameter tuning and post-processing. If you try these algorithms, you may therefore run into difficulties with training time and GPU resources.
In addition, CheXNet was able to obtain a high score without extra steps (data augmentation, parameter tuning, etc.) compared to the other algorithms.
This is the loss graph up to about 2,000 iterations. Since training takes too long on a Kaggle kernel, I ran it elsewhere and brought the graph here. When training, don't be surprised by the large loss values at the beginning; stay calm and they will go down.
We deployed it to check how well it performs. Testing the deployed model using a new chest radiograph gave the following result:
Detecting pneumonia from images remains a difficult problem for deep learning. Object detectors are good at finding objects that have predefined shapes and appearances. Lung opacities, on the other hand, have no exact shape, and that is what makes this problem so hard.