
AIM 2024 Sparse Neural Rendering Challenge: Methods and Results

Authors:
Nazarczuk, Michal
Catley-Chandar, Sibi
Tanay, Thomas
Shaw, Richard
Pérez-Pellitero, Eduardo
Timofte, Radu
Yan, Xing
Wang, Pan
Guo, Yali
Wu, Yongxin
Cai, Youcheng
Yang, Yanan
Li, Junting
Zhou, Yanghong
Mok, P. Y.
He, Zongqi
Xiao, Zhe
Chan, Kin-Chung
Goshu, Hana Lebeta
Yang, Cuixin
Dong, Rongkang
Xiao, Jun
Lam, Kin-Man
Hao, Jiayao
Gao, Qiong
Zu, Yanyan
Zhang, Junpei
Jiao, Licheng
Liu, Xu
Purohit, Kuldeep
Publication Year: 2024

Abstract

This paper reviews the challenge on Sparse Neural Rendering that was part of the Advances in Image Manipulation (AIM) workshop, held in conjunction with ECCV 2024. This manuscript focuses on the competition set-up, the proposed methods and their respective results. The challenge aims at producing novel camera views of diverse scenes from sparse image observations. It is composed of two tracks with differing levels of sparsity: 3 views in Track 1 (very sparse) and 9 views in Track 2 (sparse). Participants are asked to optimise objective fidelity to the ground-truth images, as measured via the Peak Signal-to-Noise Ratio (PSNR) metric. For both tracks, we use the newly introduced Sparse Rendering (SpaRe) dataset and the popular DTU MVS dataset. In this challenge, 5 teams submitted final results to Track 1 and 4 teams submitted final results to Track 2. The submitted models are varied and push the boundaries of the current state-of-the-art in sparse neural rendering. A detailed description of all models developed in the challenge is provided in this paper.

Comment: Part of the Advances in Image Manipulation workshop at ECCV 2024
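
For reference, the PSNR metric named in the abstract is conventionally defined as 10 * log10(MAX^2 / MSE), where MSE is the mean squared error between a rendered view and its ground-truth image. Below is a minimal sketch in Python/NumPy; the function name and the assumption that images are normalised to [0, max_val] are illustrative, not the challenge's actual evaluation code.

    import numpy as np

    def psnr(rendered: np.ndarray, ground_truth: np.ndarray, max_val: float = 1.0) -> float:
        """Peak Signal-to-Noise Ratio in dB; higher means closer to ground truth."""
        # Mean squared error over all pixels and channels.
        mse = np.mean((rendered.astype(np.float64) - ground_truth.astype(np.float64)) ** 2)
        if mse == 0.0:
            return float("inf")  # identical images
        return 10.0 * np.log10((max_val ** 2) / mse)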

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2409.15045
Document Type: Working Paper