Foreground-Background Disentanglement based on Image and Feature Co-Learning for 3D-Aware Generative Models

Sanghyuk Lee, Daeha Kim, Byung Cheol Song

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Recently, studies on generative models that exploit 3D information have been active. GIRAFFE, one of the latest 3D-aware generative models, shows better feature disentanglement than existing generative models because it generates an image through volume rendering of independently formed 3D neural feature fields. However, GIRAFFE still suffers from incomplete disentanglement of foreground and background. To achieve better disentanglement than GIRAFFE, we propose co-adversarial learning of the generative model at both the image and feature levels. Extensive experiments show that the proposed generative model produces photo-realistic images with fewer parameters than existing 3D-aware generative models, along with excellent foreground-background disentanglement.
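
The abstract leaves the training procedure unspecified, so the following is only a minimal sketch of what adversarial supervision at both the image and feature levels could look like in PyTorch: one discriminator judges rendered RGB images, a second judges 2D neural feature maps (such as those produced by GIRAFFE-style volume rendering before the neural renderer). The `PatchDiscriminator` module, all channel counts, and the idea of drawing "real" feature maps from an encoder applied to real images are illustrative assumptions, not the authors' design.

```python
# Minimal sketch (not the authors' code): co-adversarial learning with two
# discriminators, one at the image level and one at the feature level.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Small convolutional discriminator; `in_ch` differs per level
    (3 for RGB images, feature-channel count for feature maps)."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake logits
        )
    def forward(self, x):
        return self.net(x)

bce = nn.BCEWithLogitsLoss()

def d_loss(disc, real, fake):
    """Standard discriminator loss at a single level (image or feature)."""
    real_logits = disc(real)
    fake_logits = disc(fake.detach())  # do not backprop into the generator
    return (bce(real_logits, torch.ones_like(real_logits))
            + bce(fake_logits, torch.zeros_like(fake_logits)))

def g_loss(d_img, d_feat, fake_img, fake_feat):
    """Generator loss summed over both discriminators, so gradients flow
    from the image level and the feature level jointly."""
    total = 0.0
    for disc, fake in ((d_img, fake_img), (d_feat, fake_feat)):
        logits = disc(fake)
        total = total + bce(logits, torch.ones_like(logits))
    return total

if __name__ == "__main__":
    d_img = PatchDiscriminator(in_ch=3)    # judges rendered RGB images
    d_feat = PatchDiscriminator(in_ch=32)  # judges 2D neural feature maps
    real_img = torch.randn(4, 3, 64, 64)
    fake_img = torch.randn(4, 3, 64, 64)       # from the generator
    real_feat = torch.randn(4, 32, 16, 16)     # e.g., encoder on real images (assumed)
    fake_feat = torch.randn(4, 32, 16, 16)     # e.g., from volume rendering
    loss_d = d_loss(d_img, real_img, fake_img) + d_loss(d_feat, real_feat, fake_feat)
    loss_g = g_loss(d_img, d_feat, fake_img, fake_feat)
```

In a GIRAFFE-style pipeline the fake feature maps would come for free from the volume-rendering stage, so the main design question is where a matching "real" feature distribution comes from; the encoder route assumed above is one plausible choice, not something the abstract specifies.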

Original language: English
Title of host publication: 2023 IEEE International Conference on Visual Communications and Image Processing, VCIP 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9798350359855
DOIs
State: Published - 2023
Event: 2023 IEEE International Conference on Visual Communications and Image Processing, VCIP 2023 - Jeju, Korea, Republic of
Duration: 4 Dec 2023 – 7 Dec 2023

Publication series

Name: 2023 IEEE International Conference on Visual Communications and Image Processing, VCIP 2023

Conference

Conference: 2023 IEEE International Conference on Visual Communications and Image Processing, VCIP 2023
Country/Territory: Korea, Republic of
City: Jeju
Period: 4/12/23 – 7/12/23

Bibliographical note

Publisher Copyright:
© 2023 IEEE.

Keywords

  • 3D-aware generative model
  • foreground-background disentanglement
