
A 3D Omnidirectional Sensor For Mobile Robot Applications

Authors:
Xavier Savatier
Belahcene Mazari
Rémi Boutteau
Jean-Yves Ertaud
Pôle Instrumentation, Informatique et Systèmes
Institut de Recherche en Systèmes Electroniques Embarqués (IRSEEM)
École Supérieure d’Ingénieurs en Génie Électrique (ESIGELEC)
Université de Rouen Normandie (UNIROUEN)
Normandie Université (NU)
Source:
Mobile Robots Navigation, 2010
Publication Year:
2021
Publisher:
IntechOpen, 2021.

Abstract

In most of the missions a mobile robot has to carry out (intervention in hostile environments, preparation of military intervention, mapping, etc.), two main tasks must be completed: navigation and 3D perception of the environment. Vision-based solutions have therefore been widely used in autonomous robotics, because they provide a large amount of information useful for detection, tracking, pattern recognition and scene understanding. Nevertheless, the main limitations of this kind of system are the limited field of view and the loss of depth perception. A 360-degree field of view offers many advantages for navigation, such as easier motion estimation using specific properties of the optical flow (Mouaddib, 2005) and more robust feature extraction and tracking. Interest in omnidirectional vision has therefore grown significantly over the past few years, and several methods are being explored to obtain a panoramic image: rotating cameras (Benosman & Devars, 1998), multi-camera systems and catadioptric sensors (Baker & Nayar, 1999). Catadioptric sensors, i.e. the combination of a camera and a mirror of revolution, are nevertheless the only systems that can provide a panoramic image instantaneously and without moving parts, and they are thus well suited to mobile robot applications.

Depth perception can be retrieved from a set of images taken from at least two different viewpoints, either by moving the camera or by using several cameras at different positions. The use of the camera motion to recover the geometrical structure of the scene and the camera positions is known as Structure from Motion (SFM). Excellent results have been obtained in recent years with SFM approaches (Pollefeys et al., 2004; Nister, 2001), but with off-line algorithms that need to process all the images simultaneously. SFM is consequently not well adapted to the exploration of an unknown environment, because the robot needs to build the map and to localize itself in that map while it explores the world. The on-line approach, known as SLAM (Simultaneous Localization and Mapping), is one of the most active research areas in robotics, since it can provide real autonomy to a mobile robot. Some interesting results have been obtained in the last few years, but principally in building 2D maps of indoor environments using laser range-finders. A survey of these algorithms can be found in the tutorials of Durrant-Whyte and Bailey (Durrant-Whyte & Bailey, 2006; Bailey & Durrant-Whyte, 2006).
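For reference, central catadioptric sensors of the kind described in the abstract are commonly modeled with the unified sphere model (Geyer & Daniilidis, 2000): a 3D point is first projected onto a unit sphere centred on the mirror, then re-projected onto the image plane from a centre shifted by the mirror parameter. The following is a minimal sketch of that projection, not the chapter's own implementation; the mirror parameter xi and the intrinsics gamma, u0, v0 are illustrative placeholders, not calibrated values.

    import numpy as np

    def project_unified(point, xi=0.9, gamma=300.0, u0=320.0, v0=240.0):
        # Unified sphere model for a central catadioptric camera.
        # point : 3D point (X, Y, Z) expressed in the mirror frame.
        # xi    : mirror parameter (0 -> pinhole, 1 -> parabolic mirror).
        # gamma, u0, v0 : generalized focal length and principal point
        # (defaults here are illustrative placeholders).
        X, Y, Z = point
        rho = np.sqrt(X * X + Y * Y + Z * Z)
        # 1. Project the point onto the unit sphere: (X, Y, Z) / rho.
        # 2. Re-project from the shifted centre (0, 0, xi) onto the
        #    normalized plane, i.e. divide by (Z + xi * rho).
        denom = Z + xi * rho
        x, y = X / denom, Y / denom
        # 3. Apply the camera intrinsics.
        return gamma * x + u0, gamma * y + v0

Inverting these three steps, i.e. lifting an image point back to its viewing ray on the sphere, is what makes metric reasoning such as triangulation possible with this kind of sensor.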
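Similarly, the depth recovery from two viewpoints mentioned in the abstract reduces, once two matched image points have been lifted to viewing rays, to intersecting those rays. Below is a minimal midpoint-triangulation sketch under stated assumptions: the rays d1 and d2 are unit-norm, the viewpoint centres c1 and c2 are known, and the names and the 0.5 m baseline in the example are hypothetical, not taken from the chapter.

    import numpy as np

    def triangulate_midpoint(c1, d1, c2, d2, eps=1e-9):
        # Closest-point (midpoint) triangulation of the two rays
        # c1 + s*d1 and c2 + t*d2, with d1 and d2 of unit norm.
        r = c2 - c1
        b = d1 @ d2                      # cosine of the angle between rays
        p, q = d1 @ r, d2 @ r
        denom = 1.0 - b * b
        if abs(denom) < eps:             # near-parallel rays: depth unreliable
            return None
        s = (p - b * q) / denom          # from minimizing the inter-ray distance
        t = b * s - q
        return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

    # Example: two viewpoints 0.5 m apart observing the point (1, 0, 2).
    c1, c2 = np.zeros(3), np.array([0.5, 0.0, 0.0])
    target = np.array([1.0, 0.0, 2.0])
    d1 = (target - c1) / np.linalg.norm(target - c1)
    d2 = (target - c2) / np.linalg.norm(target - c2)
    print(triangulate_midpoint(c1, d1, c2, d2))  # -> approx. [1. 0. 2.]

The midpoint form degrades gracefully with noisy matches, which is why it is a common baseline before moving to the SFM or SLAM machinery the abstract surveys.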

Details

Language:
English
Database:
OpenAIRE
Journal:
Mobile Robots Navigation, 2010
Accession number:
edsair.doi.dedup.....2d429f76f6548f9f68d3511ab8dd3e4d