Improving the Quality of Synthetic FLAIR Images with Deep Learning Using a Conditional Generative Adversarial Network for Pixel-by-Pixel Image Translation

Editor’s Choice

Forty patients with MS were prospectively included and scanned (3T) to acquire synthetic MR imaging and conventional FLAIR images. Synthetic FLAIR images were created with the SyMRI software. Acquired data were divided into 30 training and 10 test datasets. A conditional generative adversarial network was trained to generate improved FLAIR images from raw synthetic MR imaging data using conventional FLAIR images as targets. The peak signal-to-noise ratio, normalized root mean square error, and Dice index of MS lesion maps were calculated for both synthetic and deep learning-generated FLAIR images against conventional FLAIR images. Lesion conspicuity and the presence of artifacts were visually assessed. The peak signal-to-noise ratio and normalized root mean square error were significantly higher and lower, respectively, in generated versus synthetic FLAIR images in aggregate intracranial tissues and all tissue segments. The Dice index of lesion maps and visual lesion conspicuity were comparable between generated and synthetic FLAIR images. The authors conclude that, using deep learning, they improved synthetic FLAIR image quality by generating FLAIR images with contrast closer to that of conventional FLAIR images and fewer granular and swelling artifacts, while preserving lesion contrast.

Abstract

BACKGROUND AND PURPOSE

Magnified images from Fig 2. Synthetic FLAIR (A), DL-FLAIR (B), and conventional FLAIR (C) images are shown. Sulci are wider in B and C in some areas than they are in A (white arrows). However, for areas with tight sulci on the conventional FLAIR image (C), the sulci are tighter and more hyperintense on both synthetic FLAIR (A) and DL-FLAIR (B) images than they are on the conventional FLAIR image (black arrows).

Synthetic FLAIR images are of lower quality than conventional FLAIR images. Here, we aimed to improve the synthetic FLAIR image quality using deep learning with pixel-by-pixel translation through conditional generative adversarial network training.
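The abstract does not include implementation details, but the pixel-by-pixel translation described here follows the standard conditional GAN (pix2pix-style) formulation: a generator maps the raw synthetic MR data to a FLAIR-like image, a discriminator judges input/output pairs, and an L1 term keeps the output close to the conventional FLAIR target. The sketch below illustrates that objective in PyTorch; the channel counts, network depth, loss weight, and optimizer settings are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Toy generator: maps multi-channel "raw" synthetic MR input to one FLAIR-like channel.
generator = nn.Sequential(
    nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)

# Toy conditional (patch-style) discriminator: sees the raw input stacked with a real
# or generated FLAIR image and outputs a per-patch real/fake score map.
discriminator = nn.Sequential(
    nn.Conv2d(4 + 1, 32, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(64, 1, kernel_size=4, padding=1),
)

adv_loss = nn.BCEWithLogitsLoss()   # adversarial term
l1_loss = nn.L1Loss()               # pixel-wise fidelity term
lambda_l1 = 100.0                   # weight from the original pix2pix paper; an assumption here

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

# One training step on random tensors standing in for (raw synthetic MR, conventional FLAIR) pairs.
raw_input = torch.randn(2, 4, 64, 64)      # e.g. 4 raw contrast channels per slice (assumed)
target_flair = torch.randn(2, 1, 64, 64)   # co-registered conventional FLAIR slices

# Discriminator step: real pairs labeled 1, generated pairs labeled 0.
fake_flair = generator(raw_input).detach()
d_real = discriminator(torch.cat([raw_input, target_flair], dim=1))
d_fake = discriminator(torch.cat([raw_input, fake_flair], dim=1))
loss_d = adv_loss(d_real, torch.ones_like(d_real)) + adv_loss(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: fool the discriminator while staying close to the target in L1.
fake_flair = generator(raw_input)
d_fake = discriminator(torch.cat([raw_input, fake_flair], dim=1))
loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1_loss(fake_flair, target_flair)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()

print(f"D loss: {loss_d.item():.3f}  G loss: {loss_g.item():.3f}")
```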

MATERIALS AND METHODS

Forty patients with MS were prospectively included and scanned (3T) to acquire synthetic MR imaging and conventional FLAIR images. Synthetic FLAIR images were created with the SyMRI software. Acquired data were divided into 30 training and 10 test datasets. A conditional generative adversarial network was trained to generate improved FLAIR images from raw synthetic MR imaging data using conventional FLAIR images as targets. The peak signal-to-noise ratio, normalized root mean square error, and Dice index of MS lesion maps were calculated for both synthetic and deep learning-generated FLAIR images against conventional FLAIR images. Lesion conspicuity and the presence of artifacts were visually assessed.
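For reference, the quantitative comparisons described above can be reproduced with standard definitions of these metrics. The following is a minimal NumPy sketch; the normalization used for the normalized root mean square error (the reference intensity range) is an assumption, as the paper may use a different convention, and the random arrays merely stand in for co-registered images and lesion masks.

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    reference = reference.astype(np.float64)
    test = test.astype(np.float64)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def nrmse(reference, test):
    """Root mean square error normalized by the reference intensity range (assumed convention)."""
    reference = reference.astype(np.float64)
    test = test.astype(np.float64)
    rmse = np.sqrt(np.mean((reference - test) ** 2))
    return rmse / (reference.max() - reference.min())

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary lesion masks."""
    mask_a = mask_a.astype(bool)
    mask_b = mask_b.astype(bool)
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

# Example with random data in place of co-registered FLAIR images and lesion masks.
rng = np.random.default_rng(0)
conventional = rng.random((256, 256))
generated = conventional + 0.05 * rng.standard_normal((256, 256))
print(f"PSNR:  {psnr(conventional, generated):.2f} dB")
print(f"NRMSE: {nrmse(conventional, generated):.4f}")
lesions_a = rng.random((256, 256)) > 0.95
lesions_b = rng.random((256, 256)) > 0.95
print(f"Dice:  {dice(lesions_a, lesions_b):.3f}")
```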

RESULTS

The peak signal-to-noise ratio and normalized root mean square error were significantly higher and lower, respectively, in generated versus synthetic FLAIR images in aggregate intracranial tissues and all tissue segments (all P < .001). The Dice index of lesion maps and visual lesion conspicuity were comparable between generated and synthetic FLAIR images (P = 1 and .59, respectively). Generated FLAIR images showed fewer granular artifacts (P = .003) and, in all cases, fewer swelling artifacts than synthetic FLAIR images.

CONCLUSIONS

Using deep learning, we improved the synthetic FLAIR image quality by generating FLAIR images that have contrast closer to that of conventional FLAIR images and fewer granular and swelling artifacts, while preserving the lesion contrast.

Read this article: http://bit.ly/2MPmlOH

Tags: Jeffrey Ross