
IGAF: Incremental Guided Attention Fusion for Depth Super-Resolution

Authors:
Tragakis, Athanasios
Kaul, Chaitanya
Mitchell, Kevin J.
Dai, Hang
Murray-Smith, Roderick
Faccio, Daniele
Source:
Sensors 2025, 25, 24
Publication Year:
2025

Abstract

Accurate depth estimation is crucial for many fields, including robotics, navigation, and medical imaging. However, conventional depth sensors often produce low-resolution (LR) depth maps, making detailed scene perception challenging. To address this, enhancing LR depth maps to high-resolution (HR) ones has become essential, guided by HR structured inputs such as RGB or grayscale images. We propose a novel sensor fusion methodology for guided depth super-resolution (GDSR), a technique that combines LR depth maps with HR images to estimate detailed HR depth maps. Our key contribution is the Incremental Guided Attention Fusion (IGAF) module, which effectively learns to fuse features from RGB images and LR depth maps, producing accurate HR depth maps. Using IGAF, we build a robust super-resolution model and evaluate it on multiple benchmark datasets. Our model achieves state-of-the-art results compared to all baseline models on the NYU v2 dataset for $\times 4$, $\times 8$, and $\times 16$ upsampling. It also outperforms all baselines in a zero-shot setting on the Middlebury, Lu, and RGB-D-D datasets. Code, environments, and models are available on GitHub.
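The abstract describes fusing HR guide-image features into LR depth features through a learned attention module, applied incrementally across upsampling stages. The sketch below illustrates that general idea in PyTorch; the class names, layer choices, and stage layout are assumptions for illustration, not the paper's actual IGAF architecture.

```python
import torch
import torch.nn as nn


class GuidedAttentionFusion(nn.Module):
    """Hypothetical fusion block: an attention map predicted from both feature
    streams gates the guide (RGB) features before they are added to the depth
    features. This is a sketch, not the paper's exact IGAF module."""

    def __init__(self, channels: int):
        super().__init__()
        # Attention weights are predicted from the concatenated streams.
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        self.refine = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, depth_feat: torch.Tensor, guide_feat: torch.Tensor) -> torch.Tensor:
        a = self.attn(torch.cat([depth_feat, guide_feat], dim=1))
        # Residual fusion: depth features plus attention-gated guide features.
        return depth_feat + self.refine(a * guide_feat)


class IncrementalUpsampler(nn.Module):
    """Upsample the depth features x2 per stage, fusing guide features at each
    scale; stacking 2/3/4 stages would give the x4/x8/x16 factors mentioned
    in the abstract."""

    def __init__(self, channels: int, num_stages: int):
        super().__init__()
        self.stages = nn.ModuleList(
            GuidedAttentionFusion(channels) for _ in range(num_stages)
        )
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

    def forward(self, depth_feat: torch.Tensor, guide_feats: list) -> torch.Tensor:
        # guide_feats: one guide feature map per stage, each matching the
        # upsampled depth resolution at that stage.
        for fuse, g in zip(self.stages, guide_feats):
            depth_feat = self.up(depth_feat)
            depth_feat = fuse(depth_feat, g)
        return depth_feat
```

For example, two stages take a 16x16 depth feature map to 64x64 (a x4 factor), consuming guide features at 32x32 and 64x64 along the way.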

Details

Database:
arXiv
Journal:
Sensors 2025, 25, 24
Publication Type:
Report
Accession number:
edsarx.2501.01723
Document Type:
Working Paper
Full Text:
https://doi.org/10.3390/s25010024