How to easily train a 3D U-Net or any other model for lung cancer segmentation.

After a string of state-of-the-art (SOTA) results from convolutional networks, it’s no secret that the deep learning community is actively trying to hit a milestone in medical imaging as well: Stanford is preparing Medical ImageNet, and many researchers are investigating the applicability of neural networks in healthcare. One hot topic in the area is lung cancer detection on Computed Tomography (CT) scans. This is what a CT scan looks like:

Typical CT scan with lungs in 3D

Everyone who has worked with CT scans knows that preprocessing is a painful task: every file weighs 300+ MB, while the areas of interest are usually limited to extremely small zones. Feeding a whole 3D scan to a neural network is nearly impossible and memory-inefficient, which is why you either go 2D, slice by slice, or resort to tricky approaches for cropping zones of interest. For the popular LIDC-IDRI database, 1018 DICOM series weigh ~124 GB, preprocessing plus network training may take very long (up to 24 hours, depending on your machine), and you need to write a lot of code.

Say you want a neural network to segment lung cancer nodules on scans. The task requires an input and an output for the network: scans with lungs, and masks where the nodules are marked.

Slice of CT scan and corresponding mask with cancerous nodules (yellow)

To do this, you need to do a lot of work: load CT files in DICOM, RAW or another format, preprocess them, fetch annotations from a CSV, create the corresponding masks and finally feed everything to the network. This is a long and difficult process. Another way is to use RadIO, which makes preprocessing simpler and ships with many ready-made methods, greatly reducing the time it takes to squeeze the maximum out of your networks.

Let’s see how it works:

1. Index the data in a Dataset

Index all the data and create a Dataset, which represents all the raw files and lets you do the cool thing: iterate over the data in batches, just like when you are training a neural network. A Batch is a subset of indices plus the associated data (called ‘components’) needed for both preprocessing and training the network.
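The index-then-batch idea is simple enough to sketch without any framework. The names below (`iterate_batches`, `scan_ids`) are illustrative, not RadIO’s actual API:

```python
# Minimal illustration of index-then-batch iteration (not RadIO's actual code):
# a dataset is just an index of scan ids; a batch is a subset of that index,
# for which the data ("components") gets loaded lazily.

def iterate_batches(index, batch_size):
    """Yield consecutive subsets of the index, one per training step."""
    for start in range(0, len(index), batch_size):
        yield index[start:start + batch_size]

scan_ids = ["LUNA_%03d" % i for i in range(10)]   # stand-in for DICOM series ids
batches = list(iterate_batches(scan_ids, batch_size=4))
# three batches: two of size 4 and a final one of size 2
```

The real Dataset adds lazy loading of components on top of this, so a 124 GB archive never has to fit in memory at once.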

For example, the segmentation task requires each Batch to have ‘images’ and ‘masks’ components, which hold the 3D scans and the binary masks as arrays, as well as ‘origin’ and ‘spacing’ components, in case you need them. You can add as many components as you like: create a ‘nodules’ component and put the (z, y, x) coordinates of nodules in it. RadIO’s internal methods automatically update nodule locations (e.g. if the spacing changes).
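Why do nodule coordinates need updating at all? Because a voxel index only means something relative to the scan’s origin and spacing: world = origin + voxel * spacing. A small NumPy sketch of the conversion (the function name and the numbers are ours, purely for illustration):

```python
import numpy as np

# World coordinate of a voxel along each (z, y, x) axis:
#   world = origin + voxel * spacing
# so after resampling to a new spacing, the same physical point
# lands on a different array index.

def world_to_voxel(world_zyx, origin, spacing):
    """Convert a point in mm to (fractional) voxel indices."""
    return (np.asarray(world_zyx) - origin) / spacing

origin = np.array([-200.0, -150.0, -150.0])      # mm, illustrative values
old_spacing = np.array([2.5, 0.7, 0.7])          # mm per voxel before resize
new_spacing = np.array([1.0, 1.0, 1.0])          # mm per voxel after resize

nodule_world = np.array([-150.0, -80.0, -10.0])  # nodule centre in mm

old_voxel = world_to_voxel(nodule_world, origin, old_spacing)
new_voxel = world_to_voxel(nodule_world, origin, new_spacing)

# Same physical point, different array indices:
assert np.allclose(new_voxel, old_voxel * old_spacing / new_spacing)
```

This is the bookkeeping RadIO does for you whenever a preprocessing action changes the spacing.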

2. Preprocess with pipelines

After creating the Dataset you can run preprocessing. RadIO provides a number of handy methods you can easily try, and it is convenient to chain them together into clear, easy-to-read pipelines. Since preprocessing scans one by one is often inefficient, RadIO loads a whole Batch of scans at once (using as much memory as it can), stacks them and preprocesses the whole batch in one go, running in parallel where possible.

Here we load the LUNA dataset, split it into train and test, normalise values to radiologic Hounsfield units on the fly, resize images (to equalise spacing along each axis across patients), create masks and crop 3D patches of size (32, 64, 64) around the nodules.
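Conceptually, several of these pipeline actions reduce to a few array operations. Here is a pure-NumPy sketch of three of them: HU windowing, building a binary mask from a nodule centre and radius, and cropping a (32, 64, 64) patch. This assumes already-loaded, spacing-unified arrays, and the function names are ours, not RadIO’s:

```python
import numpy as np

def normalize_hu(scan, min_hu=-1000.0, max_hu=400.0):
    """Clip to the radiologically interesting HU window and scale to [0, 1]."""
    scan = np.clip(scan, min_hu, max_hu)
    return (scan - min_hu) / (max_hu - min_hu)

def make_mask(shape, center_zyx, radius):
    """Binary spherical mask around a nodule centre, in voxel coordinates."""
    z, y, x = np.ogrid[:shape[0], :shape[1], :shape[2]]
    dist2 = ((z - center_zyx[0]) ** 2 + (y - center_zyx[1]) ** 2
             + (x - center_zyx[2]) ** 2)
    return (dist2 <= radius ** 2).astype(np.uint8)

def crop_patch(volume, center_zyx, size=(32, 64, 64)):
    """Crop a fixed-size patch around a centre, clipped to the volume bounds."""
    slices = []
    for c, s, dim in zip(center_zyx, size, volume.shape):
        start = int(min(max(c - s // 2, 0), dim - s))
        slices.append(slice(start, start + s))
    return volume[tuple(slices)]

scan = np.random.uniform(-1200, 600, size=(128, 256, 256))
mask = make_mask(scan.shape, center_zyx=(60, 100, 100), radius=8)
patch = crop_patch(normalize_hu(scan), center_zyx=(60, 100, 100))
```

RadIO’s actual actions additionally handle loading the raw formats, resampling to a common spacing and batching, which is exactly the boilerplate you want to avoid writing yourself.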

3.1 Train any model you like with a usual loop

You can train any model in a usual loop, for example with TensorFlow. Suppose you have defined a model and a session, as in this oversimplified example:

Then the loop would simply feed images and masks to the feed_dict parameter of sess.run():
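Since the model definition above was shown as an image, here is a framework-agnostic sketch of that loop, with a stubbed training step standing in for the TF1 call `sess.run([optimizer, loss], feed_dict={x: images, y: masks})`. All names here are illustrative:

```python
import numpy as np

def train_step(images, masks):
    """Stub for sess.run([optimizer, loss], feed_dict={x: images, y: masks}).
    Returns a dummy 'loss' (fraction of nodule voxels) so the loop runs."""
    return float(masks.mean())

def fake_batches(batch_size=4, shape=(32, 64, 64)):
    """Stand-in for pulling batches from the pipeline: yields (images, masks)."""
    rng = np.random.default_rng(0)
    while True:
        images = rng.random((batch_size,) + shape)
        masks = (rng.random((batch_size,) + shape) > 0.99).astype(np.float32)
        yield images, masks

def training_loop(batch_gen, n_iters):
    losses = []
    for _ in range(n_iters):
        images, masks = next(batch_gen)        # one preprocessed batch
        losses.append(train_step(images, masks))
    return losses

losses = training_loop(fake_batches(), n_iters=5)
```

The real loop looks the same, except the batches come from the preprocessing pipeline and the training step is a genuine session run.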

3.2 Train any model you like (e.g. V-Net) with training pipeline

However, this can be inconvenient and bulky, so for TensorFlow and Keras training via pipelines is already implemented. Suppose we decided to train a V-Net (see the dataset documentation for a model-building reference). We just need to add the .init_model and .train_model actions to our preprocessing pipeline.

After that, you can train model immediately by running pipeline:
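The key mechanic behind chaining .init_model and .train_model is laziness: actions are merely recorded when you chain them, and only executed, batch by batch, when the pipeline is run. A toy re-implementation of that mechanic (deliberately not RadIO’s actual code):

```python
class ToyPipeline:
    """Records chained actions and replays them per batch on run()."""

    def __init__(self):
        self.actions = []

    def add(self, func):
        self.actions.append(func)
        return self                      # returning self is what enables chaining

    def run(self, batches):
        results = []
        for batch in batches:
            for action in self.actions:  # nothing ran until this point
                batch = action(batch)
            results.append(batch)
        return results

# Chain two steps; nothing executes until run() is called.
pipeline = (ToyPipeline()
            .add(lambda b: b * 2)        # stand-in for a preprocessing action
            .add(lambda b: b + 1))       # stand-in for a train_model action

results = pipeline.run([1, 2, 3])
```

This deferred-execution design is what lets RadIO describe the whole preprocessing-plus-training workflow once and then stream batches through it.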

You can see the progress of network training after a number of iterations:

Dice score on train part of dataset

4. See results of your network

If you want to see how the model predicts on a whole patient’s scan, hold on, it’s also super easy with the predict_on_scan action:
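Under the hood, predicting on a full scan means cutting it into patches the network can swallow, running the model on each, and stitching the predicted masks back together. A simplified non-overlapping version in NumPy (RadIO’s predict_on_scan also handles strides and overlap; the names here are ours):

```python
import numpy as np

def predict_whole_scan(scan, model, patch=(32, 64, 64)):
    """Tile the scan with non-overlapping patches, predict each, reassemble.
    Assumes each scan dimension is divisible by the patch size."""
    out = np.zeros_like(scan, dtype=np.float32)
    pz, py, px = patch
    for z in range(0, scan.shape[0], pz):
        for y in range(0, scan.shape[1], py):
            for x in range(0, scan.shape[2], px):
                crop = scan[z:z + pz, y:y + py, x:x + px]
                out[z:z + pz, y:y + py, x:x + px] = model(crop)
    return out

# Sanity check with an identity 'model': stitching must reproduce the input.
scan = np.random.random((64, 128, 128)).astype(np.float32)
pred = predict_whole_scan(scan, model=lambda crop: crop)
assert np.allclose(pred, scan)
```

With a real network, `model` would be the patch-level forward pass, and the reassembled `out` is the full-scan nodule mask shown below.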

From left-to-right: original scan, predicted mask, original mask

Now you can leave the painful preprocessing to RadIO, and write your own methods to be chained into readable, clear pipelines that preprocess a whole batch at once and run in parallel where possible. That lets you get the maximum out of your models while running reproducible experiments.

How to easily train a 3D U-Net or any other model for lung cancer segmentation. was originally published in Data Analysis Center on Medium, where people are continuing the conversation by highlighting and responding to this story.

Source: Deep Learning on Medium