Learning Domain Invariant Features for Unsupervised Indoor Depth Estimation Adaptation.
- Source :
- ACM Transactions on Multimedia Computing, Communications & Applications; Sep 2024, Vol. 20, Issue 9, p1-23, 23p
- Publication Year :
- 2024
Abstract
- Predicting depth maps from monocular images has achieved impressive performance in recent years. However, most depth estimation methods are trained with paired image-depth map data or multi-view images (e.g., stereo pairs and monocular sequences), which suffer from expensive annotation costs and poor transferability. Although unsupervised domain adaptation methods have been introduced to mitigate the reliance on annotated data, few works focus on unsupervised cross-scenario indoor monocular depth estimation. In this article, we propose to study the generalization of depth estimation models across different indoor scenarios in an adversarial domain adaptation paradigm. Concretely, a domain discriminator is designed to discriminate representations from the source and target domains, while the feature extractor aims to confuse the domain discriminator by capturing domain-invariant features. Further, we reconstruct depth maps from the latent representations under the supervision of labeled source data. As a result, the features learned by the feature extractor possess the merits of both domain invariance and low source risk, and the depth estimator can deal with the domain shift between the source and target domains. We conduct cross-scenario and cross-dataset experiments on the ScanNet and NYU-Depth-v2 datasets to verify the effectiveness of our method and achieve impressive performance. [ABSTRACT FROM AUTHOR]
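- The adversarial scheme described above can be illustrated with a minimal, hypothetical PyTorch sketch. All module definitions, layer sizes, the L1 depth loss, and the adversarial loss weight are assumptions made for illustration only; they are not the authors' implementation, which is described in the full text at the DOI below.

```python
# Hypothetical sketch of adversarial domain adaptation for indoor depth estimation.
# Module names, architectures, and loss weights are placeholders, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureExtractor(nn.Module):
    """Shared encoder producing latent features for both domains (placeholder)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class DepthDecoder(nn.Module):
    """Reconstructs a depth map from latent features (placeholder)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1))
    def forward(self, f):
        return self.net(f)

class DomainDiscriminator(nn.Module):
    """Predicts whether features come from the source (1) or target (0) domain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))
    def forward(self, f):
        return self.net(f)

extractor, decoder, discriminator = FeatureExtractor(), DepthDecoder(), DomainDiscriminator()
opt_task = torch.optim.Adam(list(extractor.parameters()) + list(decoder.parameters()), lr=1e-4)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def train_step(src_img, src_depth, tgt_img, lambda_adv=0.1):
    # 1) Discriminator step: learn to tell source features from target features.
    with torch.no_grad():
        f_src, f_tgt = extractor(src_img), extractor(tgt_img)
    logit_src, logit_tgt = discriminator(f_src), discriminator(f_tgt)
    d_loss = F.binary_cross_entropy_with_logits(logit_src, torch.ones_like(logit_src)) \
           + F.binary_cross_entropy_with_logits(logit_tgt, torch.zeros_like(logit_tgt))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) Extractor + decoder step: supervised depth reconstruction on labeled source
    #    data, plus an adversarial term that pushes target features to look
    #    "source-like", i.e. domain-invariant features with low source risk.
    f_src, f_tgt = extractor(src_img), extractor(tgt_img)
    depth_loss = F.l1_loss(decoder(f_src), src_depth)
    logit_tgt = discriminator(f_tgt)
    adv_loss = F.binary_cross_entropy_with_logits(logit_tgt, torch.ones_like(logit_tgt))
    task_loss = depth_loss + lambda_adv * adv_loss
    opt_task.zero_grad(); task_loss.backward(); opt_task.step()
    return d_loss.item(), task_loss.item()

# Example usage with random tensors standing in for ScanNet / NYU-Depth-v2 batches:
src_img, tgt_img = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
src_depth = torch.rand(2, 1, 64, 64)
print(train_step(src_img, src_depth, tgt_img))
```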
- Subjects :
- MONOCULARS
DATA mapping
GENERALIZATION
ANNOTATIONS
COST
Details
- Language :
- English
- ISSN :
- 1551-6857
- Volume :
- 20
- Issue :
- 9
- Database :
- Complementary Index
- Journal :
- ACM Transactions on Multimedia Computing, Communications & Applications
- Publication Type :
- Academic Journal
- Accession number :
- 179790514
- Full Text :
- https://doi.org/10.1145/3672397