Learning Neural Light Fields with Ray-Space Embedding Networks
- Publication Year: 2021
Abstract
- Neural radiance fields (NeRFs) produce state-of-the-art view synthesis results. However, they are slow to render, requiring hundreds of network evaluations per pixel to approximate a volume rendering integral. Baking NeRFs into explicit data structures enables efficient rendering, but results in a large increase in memory footprint and, in many cases, a quality reduction. In this paper, we propose a novel neural light field representation that, in contrast, is compact and directly predicts integrated radiance along rays. Our method supports rendering with a single network evaluation per pixel for small baseline light field datasets and can also be applied to larger baselines with only a few evaluations per pixel. At the core of our approach is a ray-space embedding network that maps the 4D ray-space manifold into an intermediate, interpolable latent space. Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset. In addition, for forward-facing scenes with sparser inputs, we achieve results that are competitive with NeRF-based approaches in terms of quality while providing a better speed/quality/memory trade-off with far fewer network evaluations.
- Comment: CVPR 2022 camera-ready revision. Major changes include: (1) additional comparison to NeX on the Stanford, RealFF, and Shiny datasets; (2) an experiment on the 360-degree Lego bulldozer scene in the appendix, using the Plücker parameterization; (3) moving the student-teacher results to the appendix; (4) clarity edits, in particular making it clear that our Stanford evaluation *does not* use subdivision.
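- To make the core idea concrete, below is a minimal PyTorch sketch of a neural light field with a ray-space embedding network, assuming a two-plane (u, v, s, t) ray parameterization. The module names, layer widths, and the sinusoidal encoding are illustrative assumptions rather than the paper's exact architecture; the sketch only shows the structural point that a ray maps through an embedding into an interpolable latent space and then to integrated radiance in a single forward pass per pixel.

```python
# Minimal sketch (not the authors' code): a neural light field with a
# ray-space embedding network. Assumes a two-plane (u, v, s, t) ray
# parameterization; module names, layer sizes, and the positional
# encoding are illustrative assumptions.
import torch
import torch.nn as nn


def positional_encoding(x, num_freqs=8):
    """Standard sinusoidal encoding applied to each input coordinate."""
    freqs = 2.0 ** torch.arange(num_freqs, device=x.device) * torch.pi
    angles = x[..., None] * freqs              # (..., dim, num_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)           # (..., dim * 2 * num_freqs)


class RaySpaceEmbedding(nn.Module):
    """Maps 4D ray coordinates into an interpolable latent space."""

    def __init__(self, in_dim=4, latent_dim=32, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, rays):                   # rays: (N, 4) as (u, v, s, t)
        return self.net(rays)


class NeuralLightField(nn.Module):
    """Predicts integrated radiance for a ray in one network evaluation."""

    def __init__(self, latent_dim=32, num_freqs=8, hidden=256):
        super().__init__()
        self.embedding = RaySpaceEmbedding(latent_dim=latent_dim)
        self.num_freqs = num_freqs
        enc_dim = latent_dim * 2 * num_freqs
        self.color = nn.Sequential(
            nn.Linear(enc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, rays):                   # (N, 4) -> (N, 3)
        z = self.embedding(rays)
        return self.color(positional_encoding(z, self.num_freqs))


if __name__ == "__main__":
    model = NeuralLightField()
    rays = torch.rand(1024, 4)                 # a batch of (u, v, s, t) rays
    rgb = model(rays)                          # one evaluation per ray/pixel
    print(rgb.shape)                           # torch.Size([1024, 3])
```

- The property the abstract emphasizes is that the embedding maps ray space into a latent space where interpolation between training rays behaves well, which is what lets a single evaluation per pixel suffice for dense forward-facing light fields, with only a few evaluations needed at larger baselines.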
- Subjects: Computer Science - Computer Vision and Pattern Recognition
Details
- Database: arXiv
- Publication Type: Report
- Accession number: edsarx.2112.01523
- Document Type: Working Paper