Unpaired medical image colorization using generative adversarial network

Yihuai Liang, Dongho Lee, Yan Li, Byeong Seok Shin

Research output: Contribution to journal › Article › peer-review

20 Scopus citations

Abstract

We consider medical image transformation problems in which a grayscale image is transformed into a color image. The colorized medical image should preserve the features of the input image, because extra synthesized features can increase the possibility of diagnostic errors. In this paper, to keep colorized medical images faithful to their inputs, to improve the quality of the synthesized images, and to leverage unpaired training data, a colorization network is proposed based on the cycle-consistent generative adversarial network (CycleGAN) model, combining a perceptual loss function and a total variation (TV) loss function. Visual comparisons and the NRMSE, PSNR, and SSIM metrics are used to evaluate the performance of the proposed method. The experimental results show that GAN-based style transfer can be applied to the colorization of medical images. Moreover, introducing the perceptual loss and TV loss yields higher-quality colorization results than using the CycleGAN model alone.
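The TV loss mentioned in the abstract penalizes large differences between neighboring pixels, discouraging spurious high-frequency artifacts in the colorized output, while PSNR measures fidelity against a reference image. As a minimal sketch (the paper's exact formulations, weightings, and implementation are not given here, so the anisotropic TV variant and the `peak` convention below are assumptions):

```python
import numpy as np

def tv_loss(img):
    """Anisotropic total variation: sum of absolute differences
    between vertically and horizontally adjacent pixels."""
    dh = np.abs(img[1:, :] - img[:-1, :]).sum()  # vertical neighbors
    dw = np.abs(img[:, 1:] - img[:, :-1]).sum()  # horizontal neighbors
    return dh + dw

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A constant image has zero total variation; a checkerboard has a high one.
flat = np.zeros((4, 4))
checker = np.array([[0.0, 1.0], [1.0, 0.0]])
print(tv_loss(flat))     # 0.0
print(tv_loss(checker))  # 4.0
```

In a CycleGAN-style training loop, such a TV term would typically be added to the adversarial and cycle-consistency objectives with a small weight, since too strong a smoothness penalty blurs the output.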

Original language: English
Pages (from-to): 26669-26683
Number of pages: 15
Journal: Multimedia Tools and Applications
Volume: 81
Issue number: 19
DOIs
State: Published - Aug 2022

Bibliographical note

Publisher Copyright:
© 2021, The Author(s).

Keywords

  • Generative adversarial network
  • Medical image colorization
  • Perceptual loss
  • TV loss
