Written by Shubhang Desai
As a popular and inexpensive diagnostic modality, ultrasound offers the opportunity to collect large amounts of medical imaging data. Since most diagnoses made by radiologists can be framed as classification tasks, it is natural to try to apply machine learning to these images. Even so, ultrasound is not yet a modality that the machine learning community has explored in depth. This blog post will:
- Give you some background on ultrasound
- Explain the state of the art in ML applied to ultrasound
- Conduct a literature review of ultrasound tasks to which ML has been applied
- Outline current challenges and open questions in the problem
What is Ultrasound?
Ultrasound is a technique in which a transducer that emits ultra-high-frequency sound waves is placed on the skin. The sound waves reflect off of organ boundaries in the body and are in turn picked up by the transducer. The time between the initial emission of a wave and its return allows the scanner to build an image of the inside of the body.
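Concretely, the scanner converts each echo's round-trip time into a depth using an assumed average speed of sound in soft tissue (about 1,540 m/s, a standard calibration value, not a figure from this post):

```python
# Depth of a reflecting boundary from the round-trip echo time.
# 1540 m/s is the standard scanner calibration for the average
# speed of sound in soft tissue.
SPEED_OF_SOUND_M_PER_S = 1540.0

def echo_depth_cm(round_trip_time_s: float) -> float:
    """Distance to the reflecting boundary, in centimeters.

    The pulse travels to the boundary and back, so the one-way
    distance is half of speed * time.
    """
    one_way_m = SPEED_OF_SOUND_M_PER_S * round_trip_time_s / 2.0
    return one_way_m * 100.0

# An echo returning after 65 microseconds comes from roughly 5 cm deep.
print(echo_depth_cm(65e-6))
```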
Two main “flavors” of ultrasound exist: B-mode and Doppler. In B-mode ultrasound, the reflected sound waves create a simple still image of the anatomy. In Doppler ultrasound, the distortion of the sound waves caused by movement in the body is used to show the flow of fluids, such as blood through the veins. These movements are color-coded, making Doppler scans three-channel color images, while B-mode scans are single-channel grayscale images. In both cases, still images can be taken in quick succession to create videos.
Ultrasound can be used to take simple images of the anatomy (known as anatomical ultrasound), or to capture more complex information about the body, such as blood flow or tissue stiffness (known as functional ultrasound). It can also be used to interact with tissue through high-intensity sound beams (known as therapeutic ultrasound); one example of such an interaction is destroying blood clots. Anatomical and functional ultrasound produce images and videos we can apply machine learning to, while therapeutic ultrasound does not.
Ultrasound is an incredibly inexpensive and portable modality of diagnosis. The procedure is non-invasive and quickly gives radiologists information necessary to make diagnoses. Sonography machines are being made smaller and smaller (see: an ultrasound probe that can attach to a smartphone), making them more and more accessible to developing countries. As such, achieving radiology-level performance on ultrasound images would deliver impactful and feasible medical solutions to such countries.
Check out this NIH resource to learn more about ultrasound imaging: https://www.nibib.nih.gov/science-education/science-topics/ultrasound.
Working with Ultrasound Data
Ultrasound images generally come in a file format known as DICOM, a standard medical imaging format that stores the pixel values of scans produced by various modalities, along with additional parameters about the exam. Scans are saved as ‘.dcm’ files. A great package for working with this format in Python is pydicom.
Unlike modalities such as CT or X-ray, ultrasound images require little preprocessing. Because the pixel values stored in the DICOM files directly reflect what a radiologist sees, it is generally fine to keep the images as they are. Moreover, the grainy nature of ultrasound images makes it quite difficult to isolate certain structures, such as veins, using traditional computer vision techniques, so heavy preprocessing tends to be cumbersome and not worth the effort.
Although ultrasound has not been heavily explored in ML, a few papers make up the current state of the art on the task. Some recent work is discussed below.
Identification
The most basic task for ML applied to ultrasound is identification: given a scan, determine whether (and often where) it contains an abnormality. Perhaps surprisingly, prior to around 2010 there was already a push to apply neural networks (often referred to as Artificial Neural Networks, or ANNs, in medical papers) to medical problems. During this period, papers applied ANNs to ultrasound images to identify liver diseases, prostatic cancer, breast nodules, and deep vein thrombosis (DVT). These ANNs were what we now call fully-connected networks: the most basic type of network, which takes only a feature vector of numbers and has no weight sharing (unlike the convolutional networks used for images today). In other words, these models extracted numerical features from the ultrasound images and passed them through a shallow fully-connected network.
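That style of model, hand-extracted features fed through a shallow fully-connected network, can be sketched in plain NumPy (the feature count and layer sizes below are illustrative, not taken from the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes: 16 hand-extracted image features (e.g. texture
# statistics), one hidden layer of 8 units, one probability out.
W1, b1 = rng.normal(size=(16, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(features):
    """Forward pass of a one-hidden-layer fully-connected network."""
    hidden = sigmoid(features @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)   # probability of abnormality

# A batch of 4 feature vectors -> 4 probabilities in (0, 1).
probs = forward(rng.normal(size=(4, 16)))
print(probs.shape)  # (4, 1)
```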
About five years later, there was a period in which traditional ML techniques were applied to the ultrasound problem: binary decision trees to detect DVT, and logistic regression and SVMs to classify breast tumors. Only recently has deep learning been applied to identification in ultrasound. Ravishankar et al. started with a CNN trained on ImageNet and, using transfer learning, taught it to detect kidneys in ultrasound images. Zheng et al. also used transfer learning to detect abnormalities in kidney and urinary tract ultrasounds. These early results are an extremely promising indication that ultrasound is a prime nail for the CNN hammer!
Generation
A common application of ML to ultrasound is noise reduction: increasing the quality and/or resolution of low-quality ultrasound scans. Other projects attempt to create ultrasound images from auxiliary inputs. These projects can be grouped under the banner of “generation,” as they rely on convolutional networks that generate images to accomplish their task.
A recent paper makes use of a series of Generative Adversarial Networks to transform echogenicity maps into realistic ultrasound images. The pipeline consists of a physics-based B-mode simulator and two GANs that incrementally refine the initial simulation until a realistic output is achieved. This approach sidesteps a problem that current simulation systems face: the need to solve computationally intractable equations to produce the simulations. In this system, the GANs can produce the realistic scan output instantly given the B-mode simulation.
There is also work on increasing the quality of real ultrasound images. A recent paper uses convolutional networks to transform speckled, blurry ultrasound images into CT-quality images; an even more recent paper uses a fairly simple convolutional architecture to increase the resolution of portable ultrasound machines. These papers leverage the accessibility of the ultrasound modality to produce superior images: literally using AI to make ultrasound a better method of diagnosis!
Segmentation
Segmentation is the task of taking an input and highlighting regions of interest. In the context of medicine, this may mean coloring a problematic area in a scan to call attention to it. For the specific modality of ultrasound, a popular segmentation challenge is finding cancerous tumors in breast ultrasound (BUS) scans. A recent benchmark study compared the effectiveness of various machine learning approaches to the task by aggregating a fairly large dataset (562 images) of B-mode BUS scans and testing the approaches on it.
The study compares the effectiveness of five state-of-the-art approaches that use domain features and other traditional computer vision methods to perform segmentation. Work so far in ultrasound segmentation relies on traditional CV techniques; it will be interesting to see how this changes if and when deep learning is applied to the problem.
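To give a flavor of those traditional techniques, here is a self-contained sketch of Otsu thresholding, a classic CV segmentation primitive, run on a synthetic stand-in for a BUS scan. The image, sizes, and intensities are made up for illustration; real lesion segmentation needs far more than a global threshold:

```python
import numpy as np

def otsu_threshold(img):
    """Classic Otsu threshold: pick the gray level that maximizes
    between-class variance, a staple of traditional CV segmentation."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = img.size
    cum_count = np.cumsum(hist)
    cum_sum = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum_count[t - 1], total - cum_count[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_sum[t - 1] / w0
        mu1 = (cum_sum[-1] - cum_sum[t - 1]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic stand-in for a BUS scan: a dark hypoechoic "lesion"
# on a brighter speckled background.
rng = np.random.default_rng(0)
img = rng.normal(170, 20, size=(64, 64))
img[20:40, 20:40] = rng.normal(60, 15, size=(20, 20))
img = img.clip(0, 255)

t = otsu_threshold(img)
mask = img < t          # dark pixels -> candidate lesion region
print(t, mask.sum())
```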
Challenges & Open Questions
Although ultrasound is certainly a cheap and convenient modality, there is no abundance of labelled ultrasound images publicly available for machine learning tasks. One of the biggest challenges, then, is the lack of data. So far this has been addressed with transfer learning, in which a network trained on general image classification is further trained to classify ultrasound images. An open question is whether systems trained from scratch on a large dataset of ultrasound images, if one could be assembled, would outperform systems trained via transfer learning.
Because the available ultrasound data is limited, we also fail to exploit one of the most interesting features of sonography: the ability to take videos. Given a sequence of ultrasound frames, it would be a fantastic experiment to feed the images into a recurrent network at each time step to predict a diagnosis. The performance of a system trained on a time series of ultrasound images remains an open question.
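Such an experiment might be sketched as follows, under illustrative assumptions (each frame already summarized as a 32-dimensional feature vector, say by a CNN, and a single Elman-style recurrent cell; none of this comes from a published system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a 32-dim feature vector per video frame,
# a 16-dim recurrent hidden state.
FEAT, HID = 32, 16
Wx = rng.normal(scale=0.1, size=(FEAT, HID))
Wh = rng.normal(scale=0.1, size=(HID, HID))
Wo = rng.normal(scale=0.1, size=(HID, 1))

def diagnose(frames):
    """Run an Elman-style RNN over the frame sequence and read a
    single diagnosis probability from the final hidden state."""
    h = np.zeros(HID)
    for x in frames:                      # one step per frame
        h = np.tanh(x @ Wx + h @ Wh)
    logit = h @ Wo
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid -> probability

clip = rng.normal(size=(30, FEAT))        # a 30-frame ultrasound clip
print(float(diagnose(clip)))
```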
Most of the ultrasound data collected for machine learning tasks is B-mode. How would a system trained on Doppler ultrasound perform? The added input of blood and fluid flow would give the system an additional feature on which to base a prediction, possibly benefitting its performance. Whether this is actually the case is still an open question.
There is still a lot of work to be done on applying machine learning to ultrasound images. However, historical work and very recent resurgence of interest, in addition to the ease and practicality of the modality, make it an incredibly ripe problem for machine learning to tackle. As AI becomes more and more integral to healthcare, it will be interesting to see how diagnosis processes involving ultrasound are impacted — and how this impact can possibly benefit billions of people without access to doctors around the world.
I’d like to thank Matt Lungren MD MPH, Assistant Professor of Radiology at the Stanford University Medical Center, for his guidance and feedback throughout the writing process. I’d also like to thank Pranav Rajpurkar, Jeremy Irvin, Tanay Kothari, Aarti Bagul, and Nick Bien of the Stanford Machine Learning Group for their comments.
References
Computer-aided Diagnostic System for Diffuse Liver Diseases with Ultrasonography by Neural Networks. Ogawa et al. 6 December 1998.
 Artificial neural network analysis (ANNA) of prostatic transrectal ultrasound. Loch et al. 14 April 1999.
Computer-aided Diagnosis of Solid Breast Nodules on Ultrasound with Digital Image Processing and Artificial Neural Network. Joo et al. 1 September 2004.
 Comparative Neural Network Based Venous Thrombosis Echogenicity and Echostructure Characterization Using Ultrasound Images. Dahabiah et al. 16 October 2006.
 Predicting Deep Venous Thrombosis Using Binary Decision Trees. Nwosisi et al. October 2011.
 Computer-Aided Diagnosis for the Classification of Breast Masses in Automated Whole Breast Ultrasound Images. Moon et al. April 2011.
 Combining support vector machine with genetic algorithm to classify ultrasound breast tumor images. Wu et al. 13 May 2011.
 Understanding the Mechanisms of Deep Transfer Learning for Medical Images. Ravishankar et al. 20 April 2017.
 Transfer Learning for Diagnosis of Congenital Abnormalities of the Kidney and Urinary Tract in Children Based on Ultrasound Imaging Data. Zheng et al. 31 December 2017.
 Simulating Patho-Realistic Ultrasound Images Using Deep Generative Networks with Adversarial Learning. Francis Tom and Debdoot Sheet. 8 January 2018.
 Deep Learning in RF Sub-sampled B-mode Ultrasound Imaging. Yoon et al. 21 December 2017.
 Towards CT-Quality Ultrasound Imaging Using Deep Learning. Vedula et al. 17 Oct 2017.
 A Benchmark for Breast Ultrasound Image Segmentation (BUSIS). Xian et al. 9 January 2018.
 Ultrasound. National Institute of Biomedical Imaging and Bioengineering. July 2016.
Sound the Alarm! Deep Learning & Ultrasound Scans was originally published in Stanford AI for Healthcare on Medium.