Abstract
Recently, generative models that exploit 3D information have been actively studied. GIRAFFE, one of the latest 3D-aware generative models, achieves better feature disentanglement than existing generative models because it generates an image through volume rendering of independently composed 3D neural feature fields. However, GIRAFFE still suffers from incomplete disentanglement between foreground and background. To achieve better disentanglement than GIRAFFE, we propose co-adversarial learning of the generative model at both the image and feature levels. Extensive simulation experiments show that the proposed generative model produces photo-realistic images with fewer parameters than existing 3D-aware generative models, along with excellent foreground-background disentanglement performance.
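The co-adversarial objective described in the abstract, adversarial supervision applied at both the image level and the feature level, can be sketched as a combined generator loss. This is a minimal illustration only: the function names, the use of a non-saturating BCE loss, and the weighting factor `lam` are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def bce_with_logits(logits, target):
    """Numerically stable binary cross-entropy on raw logits."""
    return float(np.mean(
        np.maximum(logits, 0) - logits * target + np.log1p(np.exp(-np.abs(logits)))
    ))

def co_adversarial_g_loss(d_image_logits, d_feature_logits, lam=1.0):
    """Sketch of a generator loss combining two adversarial terms:
    one from an image-level discriminator and one from a feature-level
    discriminator. `lam` is a hypothetical balancing weight."""
    # Generator wants both discriminators to output "real" (target = 1).
    loss_img = bce_with_logits(d_image_logits, np.ones_like(d_image_logits))
    loss_feat = bce_with_logits(d_feature_logits, np.ones_like(d_feature_logits))
    return loss_img + lam * loss_feat

# Example: logits of 0 (discriminator undecided) give loss log(2) per term.
loss = co_adversarial_g_loss(np.zeros(4), np.zeros(4))
```

In this sketch the two discriminator branches share the target labels but operate on different representations (rendered RGB images versus intermediate neural feature maps), which is what distinguishes co-adversarial learning from a single image-level GAN loss.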
Original language | English |
---|---|
Title of host publication | 2023 IEEE International Conference on Visual Communications and Image Processing, VCIP 2023 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
ISBN (Electronic) | 9798350359855 |
DOIs | |
State | Published - 2023 |
Event | 2023 IEEE International Conference on Visual Communications and Image Processing, VCIP 2023 - Jeju, Korea, Republic of Duration: 4 Dec 2023 → 7 Dec 2023 |
Publication series
Name | 2023 IEEE International Conference on Visual Communications and Image Processing, VCIP 2023 |
---|
Conference
Conference | 2023 IEEE International Conference on Visual Communications and Image Processing, VCIP 2023 |
---|---|
Country/Territory | Korea, Republic of |
City | Jeju |
Period | 4/12/23 → 7/12/23 |
Bibliographical note
Publisher Copyright: © 2023 IEEE.
Keywords
- 3D-aware generative model
- foreground-background disentanglement