1. A Combined Approach Toward Consistent Reconstructions of Indoor Spaces Based on 6D RGB-D Odometry and KinectFusion
- Authors
Nadia Figueroa, Abdulmotaleb El Saddik, and Haiwei Dong
- Subjects
Computer Vision and Pattern Recognition (cs.CV), Multimedia (cs.MM), Visual Odometry, RGB-D Sensing, 3D Mapping, Indoor Mapping, Kinect, KinectFusion, Iterative Closest Point, Polygon Mesh, Benchmark Datasets, Evaluation
- Abstract
We propose a 6D RGB-D odometry approach that finds the relative camera pose between consecutive RGB-D frames by keypoint extraction and feature matching on both the RGB and depth image planes. Furthermore, we feed the estimated pose to the highly accurate KinectFusion algorithm, which uses a fast ICP (Iterative Closest Point) step to fine-tune the frame-to-frame relative pose and fuse the depth data into a global implicit surface. We evaluate our method on the publicly available RGB-D SLAM benchmark dataset by Sturm et al. The experimental results show that our proposed reconstruction method, based solely on visual odometry and KinectFusion, outperforms the state-of-the-art RGB-D SLAM system in accuracy. Moreover, our algorithm outputs a ready-to-use polygon mesh (highly suitable for creating 3D virtual worlds) without any postprocessing steps.
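The first stage of the pipeline (sparse feature-based relative pose estimation, whose result seeds KinectFusion's ICP refinement and TSDF fusion) can be illustrated with a minimal sketch. This is not the authors' implementation: it matches ORB features on the RGB images only and uses the depth maps to back-project matches to 3D, whereas the paper also extracts keypoints on the depth image plane. The intrinsics, depth scale, and function names below are assumed placeholders.

```python
# Sketch: frame-to-frame RGB-D odometry via feature matching plus a rigid
# Kabsch/SVD fit; the resulting transform would be handed to an ICP refinement
# such as KinectFusion's. Intrinsics and depth scale follow the TUM RGB-D
# benchmark convention and are placeholders, not the paper's exact values.
import cv2
import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5   # example Kinect-style intrinsics
DEPTH_SCALE = 5000.0                          # raw depth units per meter (TUM convention)

def backproject(u, v, z):
    """Convert a pixel (u, v) with depth z (meters) into a 3D camera-frame point."""
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

def estimate_relative_pose(rgb0, depth0, rgb1, depth1):
    """Estimate the rigid transform mapping frame-0 points into frame 1."""
    orb = cv2.ORB_create(1000)
    kp0, des0 = orb.detectAndCompute(rgb0, None)
    kp1, des1 = orb.detectAndCompute(rgb1, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des0, des1)

    # Back-project matched keypoints using the depth maps; keep valid depth only.
    pts0, pts1 = [], []
    for m in matches:
        u0, v0 = map(int, kp0[m.queryIdx].pt)
        u1, v1 = map(int, kp1[m.trainIdx].pt)
        z0 = depth0[v0, u0] / DEPTH_SCALE
        z1 = depth1[v1, u1] / DEPTH_SCALE
        if z0 > 0 and z1 > 0:
            pts0.append(backproject(u0, v0, z0))
            pts1.append(backproject(u1, v1, z1))
    P, Q = np.array(pts0), np.array(pts1)

    # Kabsch/SVD fit of the rigid transform Q ~ R @ P + t.
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp

    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T                                  # initial guess for ICP / KinectFusion
```

In a full pipeline, a robust estimator (e.g., RANSAC over the matched 3D point pairs) would typically reject outlier correspondences before the rigid fit, and the returned transform would initialize KinectFusion's frame-to-model ICP rather than serve as the final pose.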
- Published
- 2022