Abstract
Recently, various image-to-image translation (I2I) methods have improved mode diversity and visual quality through new network architectures or regularization terms. However, conventional I2I methods rely on a static decision boundary, and their encoded representations are entangled with each other, so they often suffer from mode collapse. To mitigate mode collapse, 1) we design a so-called style-guided discriminator that guides an input image toward the target image style based on a flexible decision boundary. 2) Also, we make the encoded representations contain independent domain attributes. Based on these two ideas, this paper proposes Style-Guided and Disentangled Representation for Robust Image-to-Image Translation (SRIT). SRIT improves FID by 8%, 22.8%, and 10.1% on the CelebA-HQ, AFHQ, and Yosemite datasets, respectively. The translated images of SRIT successfully reflect the styles of the target domain, indicating that SRIT achieves better mode diversity than previous works.
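The two ideas above can be sketched in miniature: an encoder output is split into independent content and style parts, and the discriminator's decision is conditioned on a target-domain style code instead of a single static boundary. This is only an illustrative NumPy sketch under assumed toy dimensions, not the authors' implementation; `encode`, `style_guided_score`, and all sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image_feat, content_dim=8):
    # Hypothetical encoder: split a feature vector into a content part and
    # a style part so that domain attributes stay disentangled.
    content = image_feat[:content_dim]
    style = image_feat[content_dim:]
    return content, style

def style_guided_score(content, style, target_style, w):
    # Hypothetical style-guided discriminator: the decision depends on how
    # close the image's style is to the target domain's style code, i.e. the
    # boundary moves with the target domain rather than being static.
    style_match = -np.linalg.norm(style - target_style)
    return float(w @ content + style_match)

feat = rng.normal(size=12)
content, style = encode(feat)
w = rng.normal(size=content.shape)

# Scoring against the matching style code yields a higher score than
# scoring against a shifted (wrong-domain) style code.
target = style.copy()
score_aligned = style_guided_score(content, style, target, w)
score_off = style_guided_score(content, style, target + 1.0, w)
assert score_aligned > score_off
```

The sketch only shows the conditioning pattern: because the score is computed relative to a per-domain style code, each target domain induces its own decision boundary.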
Original language | English |
---|---|
Title of host publication | AAAI-22 Technical Tracks 1 |
Publisher | Association for the Advancement of Artificial Intelligence |
Pages | 463-471 |
Number of pages | 9 |
ISBN (Electronic) | 1577358767, 9781577358763 |
State | Published - 30 Jun 2022 |
Event | 36th AAAI Conference on Artificial Intelligence, AAAI 2022 - Virtual, Online Duration: 22 Feb 2022 → 1 Mar 2022 |
Publication series
Name | Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 |
---|---|
Volume | 36 |
Conference
Conference | 36th AAAI Conference on Artificial Intelligence, AAAI 2022 |
---|---|
City | Virtual, Online |
Period | 22/02/22 → 1/03/22 |
Bibliographical note
Publisher Copyright: Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.