Abstract
Image-to-image translation methods aim to learn inter-domain mappings from paired or unpaired data. Although this technique has been widely used for visual prediction tasks - such as classification and image segmentation - and has achieved strong results, existing models still fail to perform flexible translations when learning different mappings, especially for images containing multiple instances. To tackle this problem, we propose DAGAN (Domain-aware Generative Adversarial Network), a generative framework that enables domains to learn diverse mapping relationships. We assume that an image is composed of a background domain and an instance domain, and we feed each into a separate translation network. Finally, we integrate the translated domains into a complete image, using smoothed labels to maintain realism. We evaluate the instance-aware framework on datasets generated with YOLO and confirm that it generates images of equal or better diversity compared with current translation models.
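The abstract only sketches the decompose-translate-compose pipeline. As a rough illustration of that idea, here is a minimal NumPy sketch: an instance mask (e.g., from a YOLO detection) is softened and used to blend the outputs of two separate translators. The names `box_blur`, `translate_instance_aware`, and the identity-style generator stand-ins are hypothetical, not the paper's implementation.

```python
import numpy as np

def box_blur(mask, k=7):
    # Simple box blur to soften a binary instance mask -- a stand-in
    # for the paper's smoothed labels at domain boundaries.
    pad = k // 2
    padded = np.pad(mask, pad, mode="edge")
    out = np.zeros_like(mask, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out / (k * k)

def translate_instance_aware(image, inst_mask, g_background, g_instance, k=7):
    """Translate background and instance regions with separate generators,
    then blend them with a smoothed mask (hypothetical composition step)."""
    soft = box_blur(inst_mask.astype(np.float32), k)[..., None]  # H x W x 1
    bg_out = g_background(image)    # background-domain translator
    inst_out = g_instance(image)    # instance-domain translator
    return soft * inst_out + (1.0 - soft) * bg_out

# Toy usage with lambdas standing in for trained GAN generators
if __name__ == "__main__":
    img = np.random.rand(64, 64, 3).astype(np.float32)
    mask = np.zeros((64, 64), dtype=np.float32)
    mask[20:40, 20:40] = 1.0  # e.g., a YOLO-detected instance box
    fake = translate_instance_aware(img, mask, lambda x: x * 0.5, lambda x: 1.0 - x)
    print(fake.shape)  # (64, 64, 3)
```

The soft blend keeps transitions between the independently translated regions smooth, which is what the smoothed labels in the abstract are meant to achieve.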
| Original language | English |
| --- | --- |
| Article number | 9341907 |
| Journal | Complexity |
| Volume | 2020 |
| DOIs | |
| State | Published - 2020 |
Bibliographical note
Publisher Copyright: © 2020 Xu Yin et al.