
Real-Time Indoor Scene Description for the Visually Impaired Using Autoencoder Fusion Strategies with Visible Cameras.

Authors :
Malek, Salim
Melgani, Farid
Mekhalfi, Mohamed Lamine
Bazi, Yakoub
Source :
Sensors (14248220). Nov 2017, Vol. 17, Issue 11, p. 2641, 14 pp.
Publication Year :
2017

Abstract

This paper describes three coarse image description strategies, which are meant to promote a rough perception of surrounding objects for visually impaired individuals, with application to indoor spaces. The described algorithms operate on images (grabbed by the user by means of a chest-mounted camera) and provide as output a list of objects that likely exist in the indoor scene around the user. In this regard, first, different colour, texture, and shape-based feature extractors are generated, followed by a feature learning step by means of AutoEncoder (AE) models. Second, the produced features are fused and fed into a multilabel classifier in order to list the potential objects. The conducted experiments point out that fusing a set of AE-learned features yields higher classification rates than using the features individually. Furthermore, compared with reference works, our method: (i) yields higher classification accuracies, and (ii) runs (at least four times) faster, which enables a potential full real-time application. [ABSTRACT FROM AUTHOR]
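
The abstract outlines a three-step pipeline: per-descriptor feature extraction, AutoEncoder-based feature learning, and fusion of the learned features into a multilabel classifier. The sketch below illustrates that general pipeline only; the descriptor dimensions, layer sizes, training settings, concatenation-based fusion, and classifier choice are all assumptions for illustration, not the authors' actual configuration.

```python
# Hypothetical sketch of an AE-fusion pipeline: one autoencoder per hand-crafted
# descriptor (colour, texture, shape), concatenation of the learned codes, and a
# multilabel classifier that lists candidate objects. All sizes are illustrative.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """A single-hidden-layer autoencoder for one descriptor type."""
    def __init__(self, in_dim, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(code_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

def train_autoencoder(ae, feats, epochs=50, lr=1e-3):
    """Unsupervised reconstruction training of one autoencoder."""
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        recon, _ = ae(feats)
        loss_fn(recon, feats).backward()
        opt.step()
    return ae

# Toy data: three descriptor matrices for N images (assumed dimensions).
N, n_labels = 32, 15
descriptors = {
    "colour": torch.rand(N, 128),
    "texture": torch.rand(N, 256),
    "shape": torch.rand(N, 100),
}
labels = (torch.rand(N, n_labels) > 0.7).float()  # binary multilabel targets

# 1) Learn an AE representation for each descriptor independently.
codes = []
for feats in descriptors.values():
    ae = train_autoencoder(AutoEncoder(feats.shape[1]), feats)
    with torch.no_grad():
        _, code = ae(feats)
    codes.append(code)

# 2) Fuse the AE-learned features (here: simple concatenation).
fused = torch.cat(codes, dim=1)

# 3) Multilabel classifier: one sigmoid output per candidate object.
clf = nn.Linear(fused.shape[1], n_labels)
opt = torch.optim.Adam(clf.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
for _ in range(100):
    opt.zero_grad()
    bce(clf(fused), labels).backward()
    opt.step()

# Predicted object list for one image: labels whose probability exceeds 0.5.
probs = torch.sigmoid(clf(fused[:1]))
predicted_objects = (probs > 0.5).nonzero(as_tuple=True)[1].tolist()
print(predicted_objects)
```

In this reading, fusing the descriptors amounts to concatenating each autoencoder's code vector before classification; the paper compares such fusion strategies against using each learned feature set on its own.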

Details

Language :
English
ISSN :
14248220
Volume :
17
Issue :
11
Database :
Academic Search Index
Journal :
Sensors (14248220)
Publication Type :
Academic Journal
Accession number :
126442101
Full Text :
https://doi.org/10.3390/s17112641