101. Efficiently enhancing co-occurring details while avoiding artifacts for light field display
- Authors
Yan Zhao, Jian Wei, Shigang Wang, Mei-Lan Piao, and Chenxi Song
- Subjects
Computer science, Image quality, Image processing, Optics, Region of interest, Human visual system model, Contrast (vision), Computer vision, Artificial intelligence, Noise (video), Light field, Electrical and Electronic Engineering, Engineering (miscellaneous), Atomic and Molecular Physics and Optics
- Abstract
The ability of the human visual system (HVS) to perceive a three-dimensional (3D) image at a glance is finite, while the detail contrast of a light field display (LFD) is typically degraded during both the acquisition and imaging stages. It is consequently difficult for viewers to rapidly locate a region of interest in the displayed 3D scene. Existing image detail-boosting solutions suffer from noise amplification, over-exaggeration, angular variations, or a heavy computational burden. In this paper, we propose a selective enhancement method for the captured light field image (LFI) that enables an attention-guiding LFD. It builds on the observation that the visually salient details within an LFI normally co-occur frequently in both the spatial and angular domains, and it exploits these co-occurrence statistics effectively. Experimental results show that LFDs improved by our efficient method are free of undesirable artifacts and robust to disparity errors while retaining correct parallaxes and occlusion relationships, thus reducing the HVS's effort to cognitively process 3D images. To the best of our knowledge, this work is the first in-depth study of computational, content-aware LFD contrast editing, and it is expected to facilitate numerous LFD-based applications.
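The abstract's core idea — enhance only details that co-occur across the spatial and angular domains — can be illustrated with a minimal NumPy sketch. This is not the authors' algorithm; the function names, thresholds, and the crude box-blur base layer are all illustrative assumptions. The sketch counts, per pixel, how many angular views agree that a strong gradient is present, and amplifies the detail layer only where that agreement is high, leaving view-inconsistent responses (likely noise or disparity errors) untouched:

```python
import numpy as np

def box3(img):
    """Crude 3x3 box blur (edge-padded) used as a low-frequency base layer."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def cooccurrence_boost(lf, grad_thresh=0.1, min_views=0.5, gain=1.5):
    """Hypothetical co-occurrence-guided enhancement.

    lf: (V, H, W) light field of V angular views, values in [0, 1].
    """
    gy, gx = np.gradient(lf, axis=(1, 2))      # per-view spatial gradients
    strong = np.hypot(gx, gy) > grad_thresh    # per-view "detail present" mask
    cooc = strong.mean(axis=0)                 # fraction of views that agree
    mask = cooc >= min_views                   # spatio-angular co-occurrence
    base = np.stack([box3(v) for v in lf])     # per-view low-frequency layer
    detail = lf - base
    boosted = np.where(mask, gain * detail, detail)  # amplify only agreeing detail
    return np.clip(base + boosted, 0.0, 1.0)
```

Because the boost is gated by agreement across views, the same edge is sharpened identically in every view, which is one plausible way to keep parallax and occlusion relationships consistent, as the abstract requires.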
- Published
- 2020