Joint Light Field Spatial and Angular Super-Resolution from a Single Image

Andre Ivan, Williem, In Kyu Park

Research output: Contribution to journal › Article › peer-review


Abstract

Synthesizing a densely sampled light field from a single image is highly beneficial for many applications. Moreover, jointly solving the angular and spatial super-resolution problems introduces new possibilities in light field imaging. The conventional method relies on physically based rendering and a secondary network to solve the angular super-resolution problem. In addition, a pixel-based loss limits the network's ability to infer scene geometry globally. In this paper, we show that both super-resolution problems can be solved jointly from a single image by a single end-to-end deep neural network that does not require a physically based approach. Two novel loss functions based on known light field domain knowledge are proposed to enable the network to consider the relation between sub-aperture images. Experimental results show that the proposed model successfully synthesizes a dense, high-resolution light field and outperforms the state-of-the-art method in both quantitative and qualitative evaluations. The method generalizes to various scenes rather than focusing on a particular subject. The synthesized light field can be used as if it had been captured by a light field camera, enabling applications such as depth estimation and refocusing.
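
As an illustrative sketch only (the abstract does not specify the authors' loss definitions), the snippet below shows one plausible form of a loss that relates sub-aperture images of a synthesized light field: an L1 penalty on angular (epipolar-plane-image-style) gradients between adjacent views, written in PyTorch. The tensor layout (B, U, V, C, H, W) and the function name epi_gradient_loss are assumptions made for illustration, not the paper's implementation.

```python
# Minimal sketch (assumption): a sub-aperture consistency loss that compares
# angular gradients of a predicted and a ground-truth light field.
# Assumed tensor layout: (B, U, V, C, H, W).
import torch
import torch.nn.functional as F


def epi_gradient_loss(pred_lf: torch.Tensor, gt_lf: torch.Tensor) -> torch.Tensor:
    """L1 loss on differences between adjacent sub-aperture views."""
    def angular_grads(lf: torch.Tensor):
        du = lf[:, 1:, :, ...] - lf[:, :-1, :, ...]  # gradient along U (horizontal views)
        dv = lf[:, :, 1:, ...] - lf[:, :, :-1, ...]  # gradient along V (vertical views)
        return du, dv

    du_p, dv_p = angular_grads(pred_lf)
    du_g, dv_g = angular_grads(gt_lf)
    # Penalizing angular gradients encourages the network to reproduce the
    # parallax structure between sub-aperture images, not just per-pixel values.
    return F.l1_loss(du_p, du_g) + F.l1_loss(dv_p, dv_g)


if __name__ == "__main__":
    # Toy usage: 3x3 angular views of 32x32 RGB patches.
    pred = torch.rand(1, 3, 3, 3, 32, 32, requires_grad=True)
    gt = torch.rand(1, 3, 3, 3, 32, 32)
    loss = epi_gradient_loss(pred, gt)
    loss.backward()
    print(float(loss))
```

Such a term would typically be combined with a conventional per-pixel reconstruction loss; on its own it constrains only inter-view relations, which is what distinguishes it from the pixel-based losses criticized in the abstract.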

Original language: English
Article number: 9119124
Pages (from-to): 112562-112573
Number of pages: 12
Journal: IEEE Access
Volume: 8
DOIs
State: Published - 2020

Bibliographical note

Publisher Copyright:
© 2013 IEEE.

Keywords

  • Deep neural network
  • light field
  • machine learning
  • super-resolution
