1. An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database
- Author
- Yang Gao, Yan Li, Qingwu Hu, and Meng Wu
- Subjects
- Computer science, imaging sensor, image processing, image matching, image retrieval, inertial measurement unit, dead reckoning, computer vision, image sensor, vision navigation, geo-referenced image database, multiple sensor-integrated mobile mapping, mobile robot navigation, Global Positioning System, artificial intelligence, mobile mapping
- Abstract
Vision navigation, which determines position and attitude through real-time processing of data collected from imaging sensors, is advantageous where a high-performance global positioning system (GPS) and inertial measurement unit (IMU) are unavailable. It is widely used in indoor navigation, deep-space navigation, and multiple sensor-integrated mobile mapping. This paper proposes a novel imaging sensor-aided vision navigation approach that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in GPS-degraded environments. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image search and retrieval. Third, a robust image matching algorithm is presented to search the GRID and match it against a real-time image. The image matched to the real-time scene is then used to calculate the 3D navigation parameters of the multiple sensor platform. Experimental results show that the proposed approach retrieves images efficiently and achieves navigation accuracies of 1.2 m in plane and 1.8 m in height during GPS outages of up to 5 min and within 1500 m.
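The linear-index storage model mentioned in the abstract can be illustrated with a minimal sketch: each geo-referenced image is keyed by a road-segment identifier and its chainage (distance along the segment), so retrieval for a rough position estimate only scans one segment's sorted list. The class name `GridIndex`, the segment IDs, and the chainage values below are hypothetical illustrations, not details from the paper.

```python
import bisect

class GridIndex:
    """Sketch of a linear-index GRID store: images are grouped by road
    segment and kept sorted by chainage for fast range queries."""

    def __init__(self):
        # segment_id -> sorted list of (chainage_m, image_id)
        self._segments = {}

    def add(self, segment_id, chainage, image_id):
        images = self._segments.setdefault(segment_id, [])
        bisect.insort(images, (chainage, image_id))

    def query(self, segment_id, chainage, window=25.0):
        """Return candidate image IDs within +/- window metres of chainage."""
        images = self._segments.get(segment_id, [])
        lo = bisect.bisect_left(images, (chainage - window, ""))
        hi = bisect.bisect_right(images, (chainage + window, "~"))
        return [image_id for _, image_id in images[lo:hi]]

idx = GridIndex()
idx.add("seg-07", 120.0, "img_0451")
idx.add("seg-07", 140.0, "img_0452")
idx.add("seg-07", 300.0, "img_0460")
print(idx.query("seg-07", 130.0))  # → ['img_0451', 'img_0452']
```

The rough pose estimate (e.g. from dead reckoning during a GPS outage) selects the segment and chainage; only the handful of candidates returned would then be passed to the paper's image matching stage.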
- Published
- 2016