This article was originally published by Xomnia in Deep Learning on Medium.
Bart: How deep learning can improve medical image analysis and help save lives
Medical imaging helps healthcare professionals pinpoint tumors in a patient, monitor disease progression, and create treatment plans. However, a patient’s body changes throughout their treatment and conditions during medical imaging can vary. These factors often result in a mismatch in images acquired throughout the stages of treatment.
For example, a medical image taken while a person lies on their back will look different from one taken while the person is on their stomach. Such deformations make it difficult, and at times impossible, to compare images of the same patient in different positions and at different stages of an illness or disease with the technology available today. Deformable Image Registration (DIR) is a tool that is being developed to overcome this challenge. DIR utilises algorithms to identify the spatial correspondence between two or more image sets.
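At its core, a deformable transformation assigns each voxel of one image a displacement that maps it onto the corresponding location in another image. As a minimal illustration (not the project's actual method), the sketch below applies a dense displacement field to a 2D image with SciPy; the `warp_image` helper and the toy translation field are hypothetical names introduced here.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(image, displacement):
    """Warp a 2D image with a dense displacement field.

    `displacement` has shape (2, H, W): per-pixel offsets (dy, dx)
    telling each output pixel where to sample from in the source.
    """
    h, w = image.shape
    grid_y, grid_x = np.mgrid[0:h, 0:w].astype(float)
    # Sample the source image at the displaced coordinates
    # (linear interpolation, edge values repeated at the border).
    coords = np.stack([grid_y + displacement[0], grid_x + displacement[1]])
    return map_coordinates(image, coords, order=1, mode="nearest")

# A field that looks one row further down everywhere reproduces
# a simple one-pixel upward shift of the image content.
img = np.arange(16, dtype=float).reshape(4, 4)
field = np.zeros((2, 4, 4))
field[0] = 1.0
warped = warp_image(img, field)
```

Real DIR methods search for the displacement field itself, typically by optimising a similarity metric between the warped and fixed images under smoothness constraints.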
Over the past year, Xomnia data scientist Bart van de Poel has been applying deep learning techniques to segment organs in medical scans. The segmentations will be used in a Multi-Objective Deformable Image Registration (MODIR) approach to provide additional guidance information, which will greatly help when solving difficult DIR problems.
We kicked off this project in 2017, together with the department of radiation oncology of the Amsterdam UMC and the Centrum Wiskunde & Informatica (CWI) in Amsterdam. From 2020 onwards, Leiden UMC will succeed Amsterdam UMC as project partner. The research is part of the Open Technology Programme, which is financed by the Dutch Research Council (NWO), and is co-financed by Elekta and Xomnia.
Together, we aim to build a novel, versatile, and powerful tool for DIR.
Real-world data dilemma
One of the major challenges in the current phase of the project is working with raw, real-world data instead of the ready-to-use, clean benchmark datasets typically encountered in online competitions and grand challenges. As a result, the machine learning models don’t perform on the same level as reported in scientific research based on benchmark data.
“So far, we’ve been in the process of pinpointing the causes of this discrepancy to make sure we’re not overlooking anything on the side of the data and model training,” explains Bart. “We want to determine if the variations in the datasets are the cause before altering the model architecture to improve performance.”
Utilising real-world data also requires more time for pre-processing and validation. In one instance, the team noticed some data leakage (information from the validation or test set ending up in the training set), which caused overly optimistic results.
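With medical data, a common source of such leakage is splitting by scan rather than by patient: two scans of the same patient can end up on both sides of the split. A minimal sketch of a patient-level split with scikit-learn (the toy IDs below are hypothetical, not the project's data):

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical setup: ten scans belonging to five patients.
# Splitting naively by scan could place scans of the same patient
# in both train and validation sets; grouping by patient prevents that.
scan_ids = np.arange(10)
patient_ids = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])

splitter = GroupShuffleSplit(n_splits=1, test_size=0.4, random_state=0)
train_idx, val_idx = next(splitter.split(scan_ids, groups=patient_ids))

# No patient contributes scans to both sides of the split.
train_patients = set(patient_ids[train_idx])
val_patients = set(patient_ids[val_idx])
```

The same group-aware splitting should also be used inside cross-validation (e.g. `GroupKFold`) so that hyperparameter tuning doesn't leak either.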
Additionally, the data has been collected over a longer period of time and has been annotated by many different doctors for different purposes. This leads to more noise in the labels, which makes training accurate models more difficult. Changes might need to be made in how the data is pre-processed.
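When several annotators have outlined the same structure, one simple way to reduce label noise (a basic alternative to more sophisticated fusion methods such as STAPLE, and purely illustrative here, not necessarily what the team does) is a per-voxel majority vote over the available masks:

```python
import numpy as np

def majority_vote(masks):
    """Fuse binary segmentation masks from several annotators:
    a voxel is foreground if more than half the annotators marked it."""
    stacked = np.stack(masks)
    return (stacked.sum(axis=0) > stacked.shape[0] / 2).astype(np.uint8)

# Three hypothetical annotations of the same four voxels.
a = np.array([[1, 1, 0, 0]])
b = np.array([[1, 0, 0, 0]])
c = np.array([[1, 1, 1, 0]])
fused = majority_vote([a, b, c])  # → [[1, 1, 0, 0]]
```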
“Discovering these or other preprocessing issues can invalidate earlier results and require us to go back to the drawing board of dataset preprocessing and redo experiments. This is very time-consuming.”
Working with 3D medical scans, instead of 2D images, has added another layer of difficulty to the project. GPU memory limitations require the team to make concessions on both the size of the model and the size of the scans that they pass to it.
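A common workaround for these memory limits (sketched here as an assumption about standard practice, not the project's exact pipeline) is to feed the network fixed-size 3D patches cut from the full scan with a sliding window, shifting the last window on each axis back so the whole volume is covered:

```python
import numpy as np

def extract_patches(volume, patch_size, stride):
    """Yield (origin, patch) pairs covering a 3D volume with a sliding
    window; the final window on each axis is shifted back to fit."""
    pz, py, px = patch_size
    sz, sy, sx = stride

    def axis_starts(dim, p, s):
        starts = list(range(0, dim - p + 1, s))
        if starts[-1] != dim - p:
            starts.append(dim - p)  # cover the remainder at the edge
        return starts

    for z in axis_starts(volume.shape[0], pz, sz):
        for y in axis_starts(volume.shape[1], py, sy):
            for x in axis_starts(volume.shape[2], px, sx):
                yield (z, y, x), volume[z:z + pz, y:y + py, x:x + px]

# A 64³ scan tiled into non-overlapping 32³ patches gives 2×2×2 = 8 patches.
vol = np.zeros((64, 64, 64), dtype=np.float32)
patches = list(extract_patches(vol, (32, 32, 32), (32, 32, 32)))
```

Predictions on the patches are then stitched back together, often with overlapping windows and averaging to smooth seams between patches.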
Detecting landmarks in medical imaging
The process is often tedious and slow going, but Bart says important first steps are being taken in this large and ambitious project.
“We’ve made progress in obtaining good segmentation results. This has been achieved by iterating on the processing of the real-world dataset, and mostly experimenting with various configurations of existing segmentation approaches.”
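Segmentation quality in this field is commonly scored with the Dice similarity coefficient, which measures the overlap between a predicted mask and a reference mask. The article doesn't name the team's metric, so the following is a generic sketch:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2·|A ∩ B| / (|A| + |B|). 1.0 means perfect overlap, 0 means none."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter) / (pred.sum() + target.sum() + eps)

# Hypothetical masks: intersection 1, sizes 2 and 1 → Dice = 2/3.
a = np.array([[1, 1, 0, 0]])
b = np.array([[1, 0, 0, 0]])
score = dice(a, b)
```

A differentiable soft version of the same formula is also widely used directly as a training loss for segmentation networks.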
Gains are also being made in other areas of the research. Members of the team recently published a paper on landmark detection, which is also a subtask of the overall project. Landmark detection involves utilising deep learning to find distinctive locations in a scan that can also likely be detected in a new scan of a patient.
“Identifying corresponding landmarks between scans can be a big help in trying to find the optimal deformable transformation that aligns one scan with another scan.”
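A widely used formulation for deep-learning-based landmark detection (offered here as a general illustration, not necessarily the approach in the team's paper) is heatmap regression: instead of predicting coordinates directly, the network regresses a Gaussian heatmap peaked at each landmark, and the predicted location is read off as the heatmap's argmax.

```python
import numpy as np

def landmark_heatmap(shape, center, sigma=2.0):
    """Gaussian heatmap target for heatmap-based landmark detection:
    value 1.0 at the landmark, decaying smoothly with distance.
    Works for any dimensionality (2D slices or full 3D scans)."""
    grids = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    sq_dist = sum((g - c) ** 2 for g, c in zip(grids, center))
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

# Hypothetical 16×16 target with a landmark annotated at (5, 9).
hm = landmark_heatmap((16, 16), (5, 9))
peak = np.unravel_index(hm.argmax(), hm.shape)
```

The smooth target tolerates small annotation noise better than exact-coordinate regression, which is useful given the label noise in real-world clinical data.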
In a next phase, the segmentation results, together with the landmarks, will be incorporated into existing image registration approaches to improve these methods. This will directly enable assessing how using automated segmentations, instead of time-consuming human segmentations, influences the final registration results. After this, completely new registration approaches will be developed which can also incorporate the automatically identified landmarks and segmentations, but are particularly well-suited to deal with large deformations.
“We still have a long road ahead of us, but we are working to go beyond the proof-of-concept stage to create new methods and techniques that can ultimately revolutionise DIR. Overall, the software solution should be convenient for use by healthcare professionals. When it all comes together, radiation treatment planning can be made easier and more accurate while reducing the workload for doctors, decreasing side-effects for patients, and potentially saving lives.”