TY - JOUR
T1 - Display Visibility Improvement Through Content and Ambient Light-Adaptive Image Enhancement
AU - Lee, Junmin
AU - Lee, Heejin
AU - Lee, Seunghyun
AU - Heo, Junho
AU - Lee, Jiwon
AU - Song, Byung Cheol
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2023
Y1 - 2023
N2 - An image on a display device under strong illuminance can be perceived as darker than the original due to the nature of the human visual system (HVS). To alleviate this degradation in software, existing schemes employ global luminance compensation or tone mapping. However, since such approaches focus on restoring luminance only, they have a fundamental drawback: chrominance cannot be sufficiently restored. Moreover, previous approaches seldom provide acceptable visibility because they do not consider the content of the input image. Furthermore, because they focus mainly on global image quality, they may yield unsatisfactory quality in certain local areas. This paper introduces VisibilityNet, a neural network model designed to restore both chrominance and luminance. Leveraging VisibilityNet, we generate an optimally enhanced dataset tailored to ambient light conditions. Using the generated dataset and a convolutional neural network (CNN), we then estimate weighted piece-wise linear enhancement curves (WPLECs) that take into account both ambient light and image content. These WPLECs effectively enhance global contrast in both luminance and chrominance. Finally, by employing a salient object detection algorithm that emulates the HVS, visibility is enhanced not only over the entire image but also in visually salient areas. We verified the performance of the proposed method against five existing approaches using two quantitative metrics on a dataset we built ourselves. Experimental results confirm that the proposed method surpasses the alternative approaches by significantly improving visibility.
AB - An image on a display device under strong illuminance can be perceived as darker than the original due to the nature of the human visual system (HVS). To alleviate this degradation in software, existing schemes employ global luminance compensation or tone mapping. However, since such approaches focus on restoring luminance only, they have a fundamental drawback: chrominance cannot be sufficiently restored. Moreover, previous approaches seldom provide acceptable visibility because they do not consider the content of the input image. Furthermore, because they focus mainly on global image quality, they may yield unsatisfactory quality in certain local areas. This paper introduces VisibilityNet, a neural network model designed to restore both chrominance and luminance. Leveraging VisibilityNet, we generate an optimally enhanced dataset tailored to ambient light conditions. Using the generated dataset and a convolutional neural network (CNN), we then estimate weighted piece-wise linear enhancement curves (WPLECs) that take into account both ambient light and image content. These WPLECs effectively enhance global contrast in both luminance and chrominance. Finally, by employing a salient object detection algorithm that emulates the HVS, visibility is enhanced not only over the entire image but also in visually salient areas. We verified the performance of the proposed method against five existing approaches using two quantitative metrics on a dataset we built ourselves. Experimental results confirm that the proposed method surpasses the alternative approaches by significantly improving visibility.
KW - Visibility improvement
KW - ambient light
KW - piece-wise linear curve
KW - salient object enhancement
UR - http://www.scopus.com/inward/record.url?scp=85168299841&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2023.3305680
DO - 10.1109/ACCESS.2023.3305680
M3 - Article
AN - SCOPUS:85168299841
SN - 2169-3536
VL - 11
SP - 87902
EP - 87916
JO - IEEE Access
JF - IEEE Access
ER -