Depth Extraction from Videos Using Geometric Context and Occlusion Boundaries
- Source :
- BMVC, Scopus-Elsevier
- Publication Year :
- 2014
- Publisher :
- British Machine Vision Association, 2014.
-
Abstract
- We present an algorithm to estimate depth in dynamic video scenes. We propose to learn and infer depth in videos from appearance, motion, occlusion boundaries, and the geometric context of the scene. Using our method, depth can be estimated from unconstrained videos without requiring camera pose estimation, and in the presence of significant background/foreground motion. We start by decomposing a video into spatio-temporal regions. For each spatio-temporal region, we learn the relationship of depth to visual appearance, motion, and geometric classes. We then infer the depth of new scenes using a piecewise planar parametrization estimated within a Markov random field (MRF) framework, combining learned appearance-to-depth mappings with occlusion-boundary-guided smoothness constraints. Subsequently, we perform temporal smoothing to obtain temporally consistent depth maps. To evaluate our depth estimation algorithm, we provide a novel dataset with ground-truth depth for outdoor video scenes. We present a thorough evaluation of our algorithm on our new dataset and on the publicly available Make3D static image dataset.
- Comment: British Machine Vision Conference (BMVC) 2014
- Subjects :
- FOS: Computer and information sciences
- Computer Vision and Pattern Recognition (cs.CV)
- Computer science
- Computer vision
- Artificial intelligence
- Multimedia
- Geometric context
- Markov random field
- Occlusion
- Depth perception
- Computing Methodologies: Image Processing and Computer Vision
- Computing Methodologies: Computer Graphics
Details
- Database :
- OpenAIRE
- Journal :
- Proceedings of the British Machine Vision Conference 2014
- Accession number :
- edsair.doi.dedup.....58772dee2c6735164bd5f366ce30f3cf
- Full Text :
- https://doi.org/10.5244/c.28.10