
BiFuse++: Self-Supervised and Efficient Bi-Projection Fusion for 360° Depth Estimation

Authors :
Wang, Fu-En
Yeh, Yu-Hsuan
Tsai, Yi-Hsuan
Chiu, Wei-Chen
Sun, Min
Source :
IEEE Transactions on Pattern Analysis and Machine Intelligence; 2023, Vol. 45, Issue 5, pp. 5448-5460 (13 pp.)
Publication Year :
2023

Abstract

Due to the rise of spherical cameras, monocular 360° depth estimation has become an important technique for many applications (e.g., autonomous systems). Accordingly, state-of-the-art frameworks for monocular 360° depth estimation, such as the bi-projection fusion in BiFuse, have been proposed. Training such a framework requires a large number of panoramas along with the corresponding depth ground truths captured by laser sensors, which greatly increases the cost of data collection. Moreover, since such a data collection procedure is time-consuming, extending these methods to different scenes becomes a scalability challenge. To this end, self-training a network for monocular depth estimation from 360° videos is one way to alleviate this issue. However, no existing framework incorporates bi-projection fusion into the self-training scheme, which severely limits self-supervised performance, since bi-projection fusion can leverage information from different projection types. In this paper, we propose BiFuse++ to explore the combination of bi-projection fusion and the self-training scenario. Specifically, we propose a new fusion module and a Contrast-Aware Photometric Loss to improve the performance of BiFuse and increase the stability of self-training on real-world videos. We conduct both supervised and self-supervised experiments on benchmark datasets and achieve state-of-the-art performance.
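The abstract names a Contrast-Aware Photometric Loss as the ingredient that stabilizes self-training; the exact formulation is given in the paper itself. As a rough illustration only, the sketch below shows one plausible reading: a standard SSIM + L1 photometric error reweighted by a local-contrast map of the target frame, so that low-texture regions (where photometric error is uninformative) contribute less. The function names, the 3x3 windows, and the normalization scheme here are hypothetical choices for the sketch, not the authors' definitions.

```python
import torch
import torch.nn.functional as F


def photometric_error(pred, target, alpha=0.85):
    """Per-pixel SSIM + L1 photometric error map.

    pred, target: (B, 3, H, W) images in [0, 1].
    Returns a (B, 1, H, W) error map.
    """
    l1 = (pred - target).abs().mean(dim=1, keepdim=True)
    # Simplified SSIM statistics over a 3x3 window.
    mu_x = F.avg_pool2d(pred, 3, 1, 1)
    mu_y = F.avg_pool2d(target, 3, 1, 1)
    sigma_x = F.avg_pool2d(pred ** 2, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(pred * target, 3, 1, 1) - mu_x * mu_y
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    )
    dssim = ((1 - ssim) / 2).clamp(0, 1).mean(dim=1, keepdim=True)
    return alpha * dssim + (1 - alpha) * l1


def local_contrast(img, eps=1e-4):
    """Local standard deviation of intensity as a contrast proxy, (B, 1, H, W)."""
    gray = img.mean(dim=1, keepdim=True)
    mu = F.avg_pool2d(gray, 3, 1, 1)
    var = F.avg_pool2d(gray ** 2, 3, 1, 1) - mu ** 2
    return var.clamp(min=0).sqrt() + eps


def contrast_aware_photometric_loss(pred, target):
    """Hypothetical contrast-weighted photometric loss (not the paper's exact
    definition): weight the photometric error by local contrast, normalized
    so the overall loss scale stays comparable to the unweighted version."""
    err = photometric_error(pred, target)
    w = local_contrast(target)
    w = w / w.mean()
    return (w * err).mean()


# Example usage with hypothetical shapes: a view synthesized from a
# neighboring 360° video frame compared against the target frame.
pred = torch.rand(2, 3, 256, 512)
target = torch.rand(2, 3, 256, 512)
loss = contrast_aware_photometric_loss(pred, target)
```

The design intuition matching the abstract: in self-training, the photometric loss is the only supervision, so errors in flat, low-contrast regions of real-world video are dominated by noise; down-weighting them is one way such a loss could "increase the stability of self-training".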

Details

Language :
English
ISSN :
0162-8828
Volume :
45
Issue :
5
Database :
Supplemental Index
Journal :
IEEE Transactions on Pattern Analysis and Machine Intelligence
Publication Type :
Periodical
Accession number :
ejs62728487
Full Text :
https://doi.org/10.1109/TPAMI.2022.3203516