1. Visual Perception of Real World Depth Map Resolution for Mixed Reality Rendering
- Author
-
Taehyun Rhee, Lohit Petikam, and Andrew Chalmers
- Subjects
Visual perception, Computer science, Global illumination, Fidelity, Virtual reality, Mixed reality, Rendering (computer graphics), Visualization, Depth map, Compositing, Computer vision, Artificial intelligence, Image resolution - Abstract
Compositing virtual objects into photographs with known real-world geometry is a common task in mixed reality (MR) applications. This geometry enables rendering of global illumination effects, such as mutual lighting, shadowing, and occlusion between the background photograph and virtual objects. Obtaining high-fidelity geometric representations of the real world can be a costly procedure, so the geometry is often approximated with depth data. However, it is not clear how much fidelity the depth data must have in order to maintain high visual quality in MR rendering. In this paper, we investigate the relationship between real-world depth fidelity and visual quality in MR rendering. We do this by conducting a series of user experiments that measure how seamlessly virtual objects blend with the background under varying depth resolutions. We independently evaluate the noticeability of multiple compositing artifacts that occur with approximate depth, and obtain perceptual thresholds in depth resolution for each artifact. These findings can inform trade-off decisions for optimising depth acquisition pipelines in MR applications.
- Published
- 2018