1. Neural Lumigraph Rendering
- Authors
-
Ryan Spicer, Kari Pulli, Andrew Jones, Petr Kellnhofer, Lars C. Jebe, and Gordon Wetzstein
- Subjects
Computer Science - Computer Vision and Pattern Recognition (cs.CV), Computer Science - Graphics (cs.GR), Rendering (computer graphics), Volume rendering, View synthesis, Graphics pipeline, Image quality, Face (geometry), Facial recognition system, Computer vision, Artificial intelligence
- Abstract
Novel view synthesis is a challenging and ill-posed inverse rendering problem. Neural rendering techniques have recently achieved photorealistic image quality for this task. State-of-the-art (SOTA) neural volume rendering approaches, however, are slow to train and require minutes of inference (i.e., rendering) time for high image resolutions. We adopt high-capacity neural scene representations with periodic activations for jointly optimizing an implicit surface and a radiance field of a scene supervised exclusively with posed 2D images. Our neural rendering pipeline accelerates SOTA neural volume rendering by about two orders of magnitude and our implicit surface representation is unique in allowing us to export a mesh with view-dependent texture information. Thus, like other implicit surface representations, ours is compatible with traditional graphics pipelines, enabling real-time rendering rates, while achieving unprecedented image quality compared to other surface methods. We assess the quality of our approach using existing datasets as well as high-quality 3D face data captured with a custom multi-camera rig.
- Comment
-
Project website: http://www.computationalimaging.org/publications/nlr/
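The abstract's "neural scene representations with periodic activations" refers to sine-activated MLPs (in the spirit of SIREN) that map 3D coordinates to an implicit surface value. The following is a minimal sketch of such a network; the layer widths, depth, and frequency factor `OMEGA_0 = 30` are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
OMEGA_0 = 30.0  # frequency scaling commonly used in SIREN-style layers


def siren_init(fan_in, fan_out, first=False):
    """SIREN-style uniform weight init; the first layer uses a wider range."""
    bound = 1.0 / fan_in if first else np.sqrt(6.0 / fan_in) / OMEGA_0
    return rng.uniform(-bound, bound, size=(fan_in, fan_out))


# A tiny 3-layer sine-activated MLP: (x, y, z) -> scalar implicit-surface value.
W1 = siren_init(3, 64, first=True)
W2 = siren_init(64, 64)
W3 = siren_init(64, 1)


def implicit_surface(points):
    """Map an (N, 3) array of 3D coordinates to (N, 1) signed-distance values."""
    h = np.sin(OMEGA_0 * (points @ W1))  # periodic activation, first layer
    h = np.sin(OMEGA_0 * (h @ W2))       # periodic activation, hidden layer
    return h @ W3                        # linear output: scalar SDF value

pts = np.array([[0.0, 0.0, 0.0], [0.5, -0.2, 0.1]])
print(implicit_surface(pts).shape)  # (2, 1)
```

In the paper's setting, the zero-level set of such a network defines the surface geometry, and a second network models the view-dependent radiance; the mesh export mentioned in the abstract would extract that zero-level set.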
- Published
- 2021
- Full Text
- View/download PDF