
Cascaded and Generalizable Neural Radiance Fields for Fast View Synthesis

Authors :
Nguyen-Ha, Phong
Huynh, Lam
Rahtu, Esa
Matas, Jiri
Heikkilä, Janne
Source :
IEEE Transactions on Pattern Analysis and Machine Intelligence; 2024, Vol. 46, Issue 5, pp. 2758-2769, 12 p.
Publication Year :
2024

Abstract

We present CG-NeRF, a cascaded and generalizable neural radiance field method for view synthesis. Recent generalizable view synthesis methods can render high-quality novel views from a set of nearby input views, but their rendering speed remains slow due to the uniform point sampling inherent to neural radiance fields. Existing scene-specific methods can train and render novel views efficiently but cannot generalize to unseen data. Our approach addresses both fast and generalizable view synthesis through two novel modules: a coarse radiance field predictor and a convolution-based neural renderer. This architecture infers consistent scene geometry from the implicit neural fields and renders new views efficiently on a single GPU. We first train CG-NeRF on multiple 3D scenes of the DTU dataset, after which the network produces high-quality and accurate novel views on unseen real and synthetic data using only photometric losses. Moreover, our method can leverage a denser set of reference images of a single scene to produce accurate novel views without relying on additional explicit representations, while maintaining the high rendering speed of the pre-trained model. Experimental results show that CG-NeRF outperforms state-of-the-art generalizable neural rendering methods on various synthetic and real datasets.
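The abstract only describes the two-module design at a high level. The following is a minimal PyTorch sketch of how a coarse radiance-field predictor feeding a convolutional neural renderer could be wired: predict density and per-point features along each ray, composite them into a 2-D feature map with standard volume rendering, then decode that map to RGB with a small CNN. All class names, layer sizes, and the pre-aggregated multi-view point features (`point_feats`) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class CoarseRadianceFieldPredictor(nn.Module):
    """Assumed coarse predictor: maps per-point features (aggregated from
    nearby input views) plus 3-D positions to density and a feature vector,
    instead of a final RGB value."""
    def __init__(self, feat_dim=32, hidden_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden_dim), nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(inplace=True),
        )
        self.sigma_head = nn.Linear(hidden_dim, 1)         # volume density
        self.feature_head = nn.Linear(hidden_dim, feat_dim)  # per-point feature

    def forward(self, point_feats, points):
        h = self.mlp(torch.cat([point_feats, points], dim=-1))
        return self.sigma_head(h), self.feature_head(h)


class ConvNeuralRenderer(nn.Module):
    """Assumed convolution-based renderer: decodes a composited 2-D feature
    map into an RGB image."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_dim, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, feat_map):
        return self.net(feat_map)


def composite(sigma, feats, deltas):
    """Standard volume-rendering weights: w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    used here to composite features (not colors) along each ray."""
    alpha = 1.0 - torch.exp(-torch.relu(sigma) * deltas)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[..., :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1,
    )[..., :-1]
    weights = alpha * trans
    return (weights.unsqueeze(-1) * feats).sum(dim=-2)


# Toy shapes: a 64x64 ray grid, 16 coarse samples per ray, 32-d features.
H, W, S, F = 64, 64, 16, 32
point_feats = torch.randn(H * W, S, F)   # multi-view features, assumed given
points = torch.randn(H * W, S, 3)        # sampled 3-D points along each ray
deltas = torch.full((H * W, S), 0.05)    # distances between consecutive samples

predictor = CoarseRadianceFieldPredictor(feat_dim=F)
renderer = ConvNeuralRenderer(feat_dim=F)

sigma, feats = predictor(point_feats, points)            # (H*W, S, 1), (H*W, S, F)
feat_map = composite(sigma.squeeze(-1), feats, deltas)   # (H*W, F)
feat_map = feat_map.view(H, W, F).permute(2, 0, 1).unsqueeze(0)  # (1, F, H, W)
rgb = renderer(feat_map)                                 # (1, 3, H, W) novel view
```

In this sketch the per-ray MLP only needs a coarse set of samples because the CNN refines the composited feature map in image space, which is one plausible way a cascaded design can trade dense ray sampling for cheap 2-D convolutions; the paper's actual sampling strategy and losses are not reproduced here.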

Details

Language :
English
ISSN :
0162-8828
Volume :
46
Issue :
5
Database :
Supplemental Index
Journal :
IEEE Transactions on Pattern Analysis and Machine Intelligence
Publication Type :
Periodical
Accession number :
ejs65979860
Full Text :
https://doi.org/10.1109/TPAMI.2023.3335311