
Pointmap-Conditioned Diffusion for Consistent Novel View Synthesis

Authors:
Nguyen, Thang-Anh-Quan
Piasco, Nathan
Roldão, Luis
Bennehar, Moussab
Tsishkou, Dzmitry
Caraffa, Laurent
Tarel, Jean-Philippe
Brémond, Roland
Publication Year:
2025

Abstract

In this paper, we present PointmapDiffusion, a novel framework for single-image novel view synthesis (NVS) that utilizes pre-trained 2D diffusion models. Our method is the first to leverage pointmaps (i.e., rasterized 3D scene coordinates) as a conditioning signal, capturing geometric priors from the reference images to guide the diffusion process. By embedding reference attention blocks and a ControlNet for pointmap features, our model balances generative capability with geometric consistency, enabling accurate view synthesis across varying viewpoints. Extensive experiments on diverse real-world datasets demonstrate that PointmapDiffusion achieves high-quality, multi-view consistent results with significantly fewer trainable parameters than other baselines for single-image NVS tasks.
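To make the conditioning signal concrete: a pointmap, as described in the abstract, stores a 3D scene coordinate at every pixel. The abstract does not give an implementation, but such a map can be rasterized from a depth map and camera parameters by unprojecting each pixel. The following is a minimal NumPy sketch under that assumption; the function name, the pinhole model, and the choice of a camera-to-world pose matrix are illustrative, not taken from the paper.

```python
import numpy as np

def depth_to_pointmap(depth, K, cam_to_world):
    """Unproject a depth map into a per-pixel 3D scene-coordinate map.

    depth:        (H, W) depth along the camera z-axis.
    K:            (3, 3) pinhole intrinsics.
    cam_to_world: (4, 4) homogeneous camera pose.
    Returns an (H, W, 3) pointmap of world coordinates
    (the "rasterized 3D scene coordinates" used as conditioning).
    """
    H, W = depth.shape
    # Pixel grid in homogeneous coordinates: (u, v, 1) per pixel.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    # Back-project to camera-frame rays, then scale by depth.
    rays = pix @ np.linalg.inv(K).T
    pts_cam = rays * depth[..., None]
    # Lift to homogeneous coordinates and transform into the world frame.
    pts_h = np.concatenate([pts_cam, np.ones((H, W, 1))], axis=-1)
    return (pts_h @ cam_to_world.T)[..., :3]
```

Rendered from the reference view, this (H, W, 3) map is image-shaped, so it can be fed to a ControlNet-style conditioning branch like any other spatial control signal.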

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2501.02913
Document Type:
Working Paper