
Robustifying semantic cognition of traversability across wearable RGB-depth cameras

Authors:
Luis M. Bergasa
Eduardo Romera
Kaiwei Wang
Kailun Yang
Universidad de Alcalá. Departamento de Electrónica
Source:
e_Buah Biblioteca Digital Universidad de Alcalá
Publication Year:
2019
Publisher:
The Optical Society, 2019.

Abstract

Semantic segmentation offers a promising means to unify different detection tasks, especially pixel-wise traversability perception, which is a fundamental enabler in robotic vision systems supporting upper-level navigational applications. However, major research efforts focus on earning marginal accuracy increments on semantic segmentation benchmarks, without ensuring the robustness of real-time segmenters to be deployed in assistive cognition systems for the visually impaired. In this paper, we conduct a comparative study across four perception systems, including a pair of commercial smart glasses, a customized wearable prototype, and two portable RGB-Depth (RGB-D) cameras that are being integrated into the next generation of navigation assistance devices. More concretely, we analyze the gap between the concepts of "accuracy" and "robustness" for critical traversability-related semantic scene understanding. A cluster of efficient deep architectures is proposed, built using spatial factorizations, hierarchical dilations, and pyramidal representations. Based on these architectures, this research demonstrates the augmented robustness of semantically traversable area parsing against variations of environmental conditions in diverse RGB-D observations, and against sensorial factors such as illumination, imaging quality, field of view, and detectable depth range.

Funding: Ministerio de Economía y Competitividad; Comunidad de Madrid; Dirección General de Tráfico
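The abstract's "spatial factorizations" and "hierarchical dilations" refer to a common efficiency technique in real-time segmentation networks: replacing a k×k convolution with a k×1 followed by a 1×k convolution, and stacking such layers with growing dilation rates to enlarge the receptive field cheaply. The following is a minimal NumPy sketch of that idea, not the paper's actual implementation; the function names and the single-channel, no-nonlinearity setting are illustrative assumptions.

```python
import numpy as np

def dilated_conv1d(x, kernel, axis, dilation=1):
    """Dilated 1-D convolution along one spatial axis of a 2-D map,
    with zero 'same' padding (output keeps the input shape)."""
    k = len(kernel)
    pad = dilation * (k // 2)
    pad_width = [(0, 0)] * x.ndim
    pad_width[axis] = (pad, pad)
    xp = np.pad(x, pad_width)
    out = np.zeros(x.shape, dtype=float)
    for i, w in enumerate(kernel):
        # shift the padded input by i * dilation along the chosen axis
        sl = [slice(None)] * x.ndim
        sl[axis] = slice(i * dilation, i * dilation + x.shape[axis])
        out += w * xp[tuple(sl)]
    return out

def factorized_dilated_block(x, k_vertical, k_horizontal, dilation=1):
    """Spatial factorization: a 3x1 pass followed by a 1x3 pass,
    both with the same dilation, instead of one dense 3x3 kernel."""
    y = dilated_conv1d(x, k_vertical, axis=0, dilation=dilation)
    return dilated_conv1d(y, k_horizontal, axis=1, dilation=dilation)

# Hierarchical dilations: stacking blocks with rates 1, 2, 4, ...
# grows the receptive field exponentially at linear parameter cost.
def hierarchical_stack(x, kernel, dilations=(1, 2, 4)):
    for d in dilations:
        x = factorized_dilated_block(x, kernel, kernel, dilation=d)
    return x
```

Applying `factorized_dilated_block` to a unit impulse makes the factorization visible: with a 3-tap kernel and dilation 2, the response spreads to a 3×3 grid of taps spaced two pixels apart, i.e. the effective footprint of a 5×5 kernel at the cost of two 3-tap passes.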

Details

ISSN:
2155-3165 and 1559-128X
Volume:
58
Database:
OpenAIRE
Journal:
Applied Optics
Accession number:
edsair.doi.dedup.....782e5f679c3c01308ea47e3849c0494e