How are GANs used to generate medical images?

Source: Deep Learning on Medium

Originally, the GAN was proposed as an unsupervised (unconditional) generation framework: in image synthesis, for example, random noise is mapped to realistic target images. Later, the conditional GAN (cGAN) was introduced, in which prior information such as labels or image features is added to the input instead of relying on noise alone; in this setting, the GAN can be regarded as a supervised (conditional) generation framework. The generative properties of both frameworks have been used in various ways to synthesize certain types of medical images.
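The distinction above comes down to what the generator receives as input. A minimal NumPy sketch (the function names and `z_dim` default are illustrative, not from any cited paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def unconditional_input(batch, z_dim=100):
    # Unconditional GAN: the generator sees only random noise z.
    return rng.standard_normal((batch, z_dim))

def conditional_input(batch, labels, n_classes, z_dim=100):
    # Conditional GAN (cGAN): noise is concatenated with a one-hot
    # label, so the generator can be steered toward a chosen class.
    z = rng.standard_normal((batch, z_dim))
    onehot = np.eye(n_classes)[labels]
    return np.concatenate([z, onehot], axis=1)

z = unconditional_input(4)
zc = conditional_input(4, labels=np.array([0, 2, 1, 2]), n_classes=3)
print(z.shape)   # (4, 100)
print(zc.shape)  # (4, 103)
```

In practice the conditioning signal can also be a whole image (as in the MR-to-CT work below), in which case the generator becomes an image-to-image translation network.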

Image generation with unconditional GANs

A lot of recent work addresses unsupervised medical image generation with GANs, which can mitigate problems such as data scarcity and class imbalance (Frid-Adar, 2018) and help us understand the nature of the data distribution and its latent structure. Existing work shows that DCGAN can synthesize realistic prostate lesions (Kitchen and Seah, 2017), retinal images (Schlegl, 2017), and lung cancer nodules (Chuquicusma, 2018). The synthetic lung nodules are indistinguishable from real ones, even for radiologists.

Frid-Adar (2018) also used DCGAN to synthesize lesion patches for different categories of liver CT, training an independent generative model for each category: cysts, metastases, and hemangiomas. Because the training set was very small, they trained the GAN on heavily augmented data. The authors show that, beyond classical data augmentation, synthetic samples from the GAN further improve a CNN classifier. Bermudez (2018) showed that DCGAN can also generate MR data at very high resolution, even from a small number of samples. After 1,500 epochs of training, their experiments achieved convincing results: human observers could not tell real from fake images.
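The augmentation strategy in Frid-Adar (2018) amounts to mixing GAN-generated samples into a small real training set before fitting a classifier. A hypothetical helper sketching that step (the function name and `ratio` parameter are my own, not from the paper):

```python
import numpy as np

def augment_with_synthetic(real_x, real_y, synth_x, synth_y, ratio=0.5, seed=0):
    # Mix GAN-generated samples into a small real training set;
    # `ratio` controls how many synthetic samples are added per
    # real one. Returns a shuffled, combined set.
    rng = np.random.default_rng(seed)
    n_synth = int(len(real_x) * ratio)
    idx = rng.choice(len(synth_x), size=n_synth, replace=False)
    x = np.concatenate([real_x, synth_x[idx]])
    y = np.concatenate([real_y, synth_y[idx]])
    perm = rng.permutation(len(x))
    return x[perm], y[perm]

# Toy example: 10 real patches plus 40 synthetic candidates.
real_x = np.zeros((10, 32, 32)); real_y = np.zeros(10, dtype=int)
synth_x = np.ones((40, 32, 32)); synth_y = np.ones(40, dtype=int)
x, y = augment_with_synthetic(real_x, real_y, synth_x, synth_y, ratio=0.5)
print(x.shape)  # (15, 32, 32)
```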

Baur (2018b) compared DCGAN and LAPGAN for skin lesion image synthesis. Due to the large variance of the training data, the number of samples is rarely sufficient to train a reliable DCGAN. The cascaded LAPGAN and its variants, however, performed well, and their synthetic samples have been used successfully to train skin lesion classifiers. Baur (2018a) used the progressively growing GAN (PGAN; Karras, 2017) to synthesize high-resolution skin lesion images with excellent results: even professional dermatologists could not tell whether they were synthetic.

Image generation with conditional GANs

1. Generating CT from MR images

CT images are acquired in many clinical settings, but CT imaging exposes patients to ionizing radiation, which can damage cells and increase cancer risk. This motivates synthesizing CT images from MR. Nie (2017) used a cascaded 3D fully convolutional network to synthesize CT images from the corresponding MR images. To improve the realism of the synthetic CT, they trained the model not only adversarially but also with a pixel-wise reconstruction loss and an image gradient loss. Nie (2017) requires training data with a one-to-one correspondence between CT and MR images.
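The three-term objective can be sketched as follows. This is a simplified NumPy stand-in under my own assumptions (the weights `w_adv`, `w_rec`, `w_gdl` and the exact form of each term are illustrative; Nie (2017) defines them differently in the paper):

```python
import numpy as np

def gradient_difference_loss(pred, target):
    # Image gradient loss: penalize mismatched spatial gradients,
    # encouraging sharper edges in the synthetic CT.
    dy_p, dx_p = np.abs(np.diff(pred, axis=0)), np.abs(np.diff(pred, axis=1))
    dy_t, dx_t = np.abs(np.diff(target, axis=0)), np.abs(np.diff(target, axis=1))
    return np.mean((dy_p - dy_t) ** 2) + np.mean((dx_p - dx_t) ** 2)

def generator_loss(pred_ct, real_ct, d_score, w_adv=1.0, w_rec=10.0, w_gdl=10.0):
    # Combined objective: adversarial + pixel-wise reconstruction
    # + image gradient difference, as in the MR-to-CT setup above.
    adv = -np.log(d_score + 1e-8)            # non-saturating adversarial term
    rec = np.mean((pred_ct - real_ct) ** 2)  # pixel-wise reconstruction
    gdl = gradient_difference_loss(pred_ct, real_ct)
    return w_adv * adv + w_rec * rec + w_gdl * gdl
```

When the synthetic CT exactly matches the real one and the discriminator is fully fooled (`d_score = 1.0`), the loss is approximately zero.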

Wolterink (2017a) used CycleGAN to convert 2D MR images to CT images without matched image pairs for training. Because paired training sets are never perfectly aligned, removing the pairing requirement did not hurt training and even produced better results. Zhao (2018a)'s Deep-supGAN maps 3D MR data of the head to its CT image to facilitate segmentation of the craniomaxillofacial bone structure. To obtain better translations, they proposed a "deep-supervision discriminator", similar to a perceptual loss, which uses feature representations extracted from a pre-trained VGG16 model to distinguish real from synthetic CT images and provides gradient updates to the generator.
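What lets CycleGAN dispense with paired data is the cycle-consistency constraint: an image translated to the other domain and back must return to itself. A minimal sketch, with the two generators passed in as plain callables (the `lam` weight of 10 is CycleGAN's common default, but still an assumption here):

```python
import numpy as np

def cycle_consistency_loss(mr, ct, G_mr2ct, G_ct2mr, lam=10.0):
    # CycleGAN trains without paired data: each translated image
    # must map back to its original, so anatomy is preserved even
    # though no MR slice is matched to a specific CT slice.
    mr_rec = G_ct2mr(G_mr2ct(mr))  # MR -> CT -> MR
    ct_rec = G_mr2ct(G_ct2mr(ct))  # CT -> MR -> CT
    return lam * (np.mean(np.abs(mr - mr_rec)) + np.mean(np.abs(ct - ct_rec)))

# Sanity check: identity "generators" reconstruct perfectly,
# so the cycle loss vanishes.
ident = lambda x: x
x = np.ones((8, 8)); y = np.zeros((8, 8))
print(cycle_consistency_loss(x, y, ident, ident))  # 0.0
```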

2. Generating MR from CT images

Similar to Wolterink (2017a), Chartsias (2017) used CycleGANs for unpaired image-to-image translation, generating cardiac MR images and segmentation masks from cardiac CT slices and their segmentation images. The authors show that additionally training a segmentation model on the synthetic data improves its performance by 16%, and that a model trained only on synthetic data is just 5% worse than one trained on real data.

Cohen (2018) pointed out that it is difficult to preserve tumor/lesion features during image-to-image translation. To address this, Jiang (2018) proposed a "tumor-aware" loss function for CycleGAN to better synthesize MR images from CT images.
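One simple way to make a translation loss "tumor-aware" is to weight reconstruction errors more heavily inside a tumor mask. This is a deliberately simplified stand-in of my own, not Jiang (2018)'s actual formulation (which involves tumor features from a segmentation network); it only illustrates the idea of privileging lesion regions:

```python
import numpy as np

def tumor_aware_l1(pred, target, tumor_mask, w_tumor=5.0):
    # Weighted L1: errors inside the tumor mask count w_tumor times
    # more, pushing the generator to preserve lesion appearance
    # during CT -> MR translation. (Illustrative weighting only.)
    err = np.abs(pred - target)
    weights = np.where(tumor_mask, w_tumor, 1.0)
    return np.mean(weights * err)

target = np.zeros((4, 4))
pred = np.ones((4, 4))
print(tumor_aware_l1(pred, target, np.zeros((4, 4), bool)))  # 1.0 (no tumor)
print(tumor_aware_l1(pred, target, np.ones((4, 4), bool)))   # 5.0 (all tumor)
```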

3. Synthesizing PET images from CT images

PET images are often used in oncology for diagnosis and staging, and combined PET/CT acquisition is standard in routine clinical practice. But PET equipment is expensive and involves radioactive tracers, so the medical image analysis community has worked to synthesize PET images directly from CT data. Ben-Cohen (2017) used a conditional GAN to synthesize liver PET images from CT data, but it performed poorly on underrepresented tumor regions. In contrast, an FCN was able to synthesize the tumors but usually produced blurred images. By blending the corresponding synthetic PET images from the conditional GAN and the FCN, they achieved very high tumor detection performance.

Similarly, Bi (2017) synthesized high-resolution PET images from paired CT images and binary label maps. The authors emphasize that adding a label map yields a more globally realistic synthesis. A tumor detection model trained on their synthetic PET images achieved results comparable to one trained on real data, validating the synthesis; they consider synthetic data useful when labeled data is scarce.

4. Synthesizing PET images from MR images

Measuring myelin content in PET images of the human brain is important for monitoring disease progression, understanding pathophysiology, and evaluating treatments for multiple sclerosis (MS). But PET imaging for MS is expensive and requires radiotracer injection. Wei (2018) used a cascade of two conditional GANs, each with a 3D U-Net-based generator and a 3D CNN discriminator, to synthesize PET images from MR. The authors argue that a single cGAN produces blurred images, and that decomposing the synthesis task into smaller, more stable sub-problems improves the results.
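The cascade can be sketched as two chained generators, where the second stage sees both the original MR and the first stage's coarse output. The wiring below is a guess at a typical two-stage design, with trivial placeholder callables standing in for the trained networks; Wei (2018)'s actual architecture may condition the stages differently:

```python
import numpy as np

def cascaded_synthesis(mr, coarse_gen, refine_gen):
    # Two-stage cGAN cascade: stage 1 produces a coarse PET estimate
    # from MR; stage 2 refines it, conditioned on both the MR and
    # the coarse output (concatenated along the channel axis).
    coarse = coarse_gen(mr)
    refined = refine_gen(np.concatenate([mr, coarse], axis=0))
    return coarse, refined

# Placeholder "generators" standing in for trained networks:
stage1 = lambda x: 0.5 * x
stage2 = lambda x: x.mean(axis=0, keepdims=True)

mr = np.ones((1, 8, 8))  # one-channel toy volume slice
coarse, refined = cascaded_synthesis(mr, stage1, stage2)
print(coarse.shape, refined.shape)  # (1, 8, 8) (1, 8, 8)
```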

Conclusion

Many GAN-based methods now exist for both unconditional and conditional image generation. But how effective are they? There is still no meaningful, universal quantitative metric for judging the realism of synthetic images. Nevertheless, the work above suggests that GANs can be used successfully for data simulation and augmentation in classification and segmentation tasks.