DAGAN: A Domain-Aware Method for Image-to-Image Translations

Xu Yin, Yan Li, Byeong Seok Shin

Research output: Contribution to journal › Article › peer-review

Abstract

The image-to-image translation method aims to learn inter-domain mappings from paired/unpaired data. Although this technique has been widely used for visual prediction tasks, such as classification and image segmentation, and has achieved strong results, it still fails to perform flexible translations when learning different mappings, especially for images containing multiple instances. To tackle this problem, we propose a generative framework, DAGAN (Domain-Aware Generative Adversarial Network), that enables domains to learn diverse mapping relationships. We assume that an image is composed of a background domain and an instance domain and feed each into a separate translation network. Finally, we integrate the translated domains into a complete image with smoothed labels to maintain realism. We evaluated this instance-aware framework on datasets generated with YOLO and confirmed that it generates images of equal or better diversity compared with current translation models.
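The pipeline described above (splitting an image into background and instance domains, translating each with its own network, then recomposing) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the TinyGenerator placeholder, the boxes_to_mask and translate helpers, and the box format are hypothetical and do not reproduce the paper's actual DAGAN architecture or its label-smoothing step.

```python
# Minimal sketch of a domain-aware translation pipeline (not the paper's code).
# Assumes instance boxes come from an external detector such as YOLO.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder encoder-decoder standing in for a domain translator."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def boxes_to_mask(boxes, height, width):
    """Rasterise detector boxes (x1, y1, x2, y2) into a binary instance mask."""
    mask = torch.zeros(1, 1, height, width)
    for x1, y1, x2, y2 in boxes:
        mask[..., y1:y2, x1:x2] = 1.0
    return mask

def translate(image, boxes, g_background, g_instance):
    """Split the image into background/instance domains, translate each with
    its own generator, then recompose into a single output image."""
    _, _, h, w = image.shape
    inst_mask = boxes_to_mask(boxes, h, w)
    bg_mask = 1.0 - inst_mask
    translated_bg = g_background(image * bg_mask)
    translated_inst = g_instance(image * inst_mask)
    # Recompose the two translated domains; the paper additionally smooths
    # labels at the boundary to keep the composite realistic.
    return translated_bg * bg_mask + translated_inst * inst_mask

if __name__ == "__main__":
    image = torch.rand(1, 3, 128, 128)   # dummy input image
    boxes = [(32, 32, 96, 96)]           # dummy YOLO-style detection
    out = translate(image, boxes, TinyGenerator(), TinyGenerator())
    print(out.shape)                     # torch.Size([1, 3, 128, 128])
```

The key design point this sketch mirrors is that the background and instance regions pass through separate generators, so each domain can learn its own mapping before the results are merged back into one image.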

Original language: English
Article number: 9341907
Journal: Complexity
Volume: 2020
DOIs
State: Published - 2020

Bibliographical note

Publisher Copyright:
© 2020 Xu Yin et al.
