Instance-wise Depth and Motion Learning from Monocular Videos
- Publication Year :
- 2019
Abstract
- We present an end-to-end joint training framework that explicitly models the 6-DoF motion of multiple dynamic objects, ego-motion, and depth in a monocular camera setup without supervision. Our technical contributions are three-fold. First, we propose a differentiable forward rigid projection module that plays a key role in our instance-wise depth and motion learning. Second, we design an instance-wise photometric and geometric consistency loss that effectively decomposes background and moving object regions. Lastly, we introduce a new auto-annotation scheme to produce video instance segmentation maps that will be utilized as input to our training pipeline. These proposed elements are validated in a detailed ablation study. Through extensive experiments conducted on the KITTI dataset, our framework is shown to outperform the state-of-the-art depth and motion estimation methods. Our code and dataset will be available at https://github.com/SeokjuLee/Insta-DM.
- Comment: Project page at https://sites.google.com/site/seokjucv/home/instadm
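The "forward rigid projection" the abstract refers to is, at its core, the standard operation of back-projecting pixels to 3-D with a depth map and camera intrinsics, applying a rigid 6-DoF motion, and re-projecting into the target view. The sketch below illustrates that generic operation with NumPy; it is not the authors' exact differentiable module, and all names (`forward_rigid_projection`, `depth`, `K`, `R`, `t`) are illustrative assumptions.

```python
import numpy as np

def forward_rigid_projection(depth, K, R, t):
    """Warp each pixel of a depth map into a second view under a rigid
    6-DoF motion (rotation R, translation t).

    NOTE: a generic rigid-warping sketch, not the paper's exact module.
    Returns the projected (u, v) grid and the depths in the target frame.
    """
    H, W = depth.shape
    # Build the pixel grid in homogeneous coordinates (u, v, 1).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(np.float64)
    # Back-project to 3-D camera coordinates: X = depth * K^{-1} [u v 1]^T.
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Apply the rigid motion, then project with the intrinsics.
    cam2 = R @ cam + t.reshape(3, 1)
    proj = K @ cam2
    z = proj[2:3]
    uv = proj[:2] / np.clip(z, 1e-6, None)  # perspective divide
    return uv.reshape(2, H, W), z.reshape(H, W)

# Sanity check: identity motion maps every pixel onto itself.
K = np.array([[100.0, 0.0, 8.0], [0.0, 100.0, 6.0], [0.0, 0.0, 1.0]])
depth = np.full((12, 16), 2.0)
uv, z = forward_rigid_projection(depth, K, np.eye(3), np.zeros(3))
```

In the training pipeline described by the abstract, this kind of warp would be applied per object instance (with its own R, t) on top of the ego-motion warp, so that photometric consistency can be enforced separately for background and moving-object regions.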
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.1912.09351
- Document Type :
- Working Paper