
Perceptually-guided deep neural networks for ego-action prediction: Object grasping

Authors :
Jenny Benois-Pineau
Aymar de Rugy
Jean-Philippe Domenger
Daniel Cattaert
Iván González-Díaz
Ministerio de Economía y Competitividad (España)
Laboratoire Bordelais de Recherche en Informatique (LaBRI)
Université de Bordeaux (UB)-Centre National de la Recherche Scientifique (CNRS)-École Nationale Supérieure d'Électronique, Informatique et Radiocommunications de Bordeaux (ENSEIRB)
Laboratoire de neurobiologie des réseaux (LNR)
Université Sciences et Technologies - Bordeaux 1-Centre National de la Recherche Scientifique (CNRS)
Institut de Neurosciences cognitives et intégratives d'Aquitaine (INCIA)
Université Bordeaux Segalen - Bordeaux 2-Université Sciences et Technologies - Bordeaux 1-SFR Bordeaux Neurosciences-Centre National de la Recherche Scientifique (CNRS)
Source :
e-Archivo, Repositorio Institucional de la Universidad Carlos III de Madrid; Pattern Recognition, Elsevier, 2019, 88, pp. 223-235. ⟨10.1016/j.patcog.2018.11.013⟩
Publication Year :
2019
Publisher :
Elsevier, 2019.

Abstract

We tackle the problem of predicting a grasping action in ego-centric video for the assistance of upper limb amputees. Our work is based on paradigms of neuroscience stating that human gaze expresses intention and anticipates actions. In our scenario, human gaze fixations are recorded by a glasses-worn eye-tracker and then used to predict the grasping actions. We have studied two aspects of the problem: which object from a given taxonomy will be grasped, and when is the moment to trigger the grasping action. To recognize objects, we use gaze to guide Convolutional Neural Networks (CNN) to focus on an object-to-grasp area. However, the acquired sequence of fixations is noisy due to saccades toward distractors and visual fatigue, and gaze is not always reliably directed toward the object-of-interest. To deal with this challenge, we use video-level annotations indicating the object to be grasped and a weak loss in Deep CNNs. To detect the moment when a person will grasp an object, we take advantage of the predictive power of Long Short-Term Memory networks to analyze gaze and visual dynamics. Results show that our method achieves better performance than other approaches on a real-life dataset. (C) 2018 Elsevier Ltd. All rights reserved.

This work was partially supported by the French National Centre for Scientific Research (CNRS) with grant Suvipp PEPS CNRS-Idex 215-2016 and the interdisciplinary project CNRS RoBioVis 2017-2019, by the Scientific Council of LaBRI, University of Bordeaux, and by the Spanish Ministry of Economy and Competitiveness under the National Grants TEC2014-53390-P and TEC2014-61729-EXP. Published.
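The abstract describes two components: a gaze-guided CNN that classifies the object-to-grasp inside a region around the current fixation, and an LSTM over gaze and visual dynamics that decides when to trigger the grasp. The following is a minimal sketch of that data flow, not the authors' implementation: the crop size, taxonomy size, feature dimension, and the ResNet-18 backbone are illustrative assumptions rather than values from the paper.

# Minimal sketch (assumed design, not the published method) of
# (1) a gaze-centred crop classified by a CNN and
# (2) an LSTM over per-frame gaze/visual features predicting "grasp now".
import torch
import torch.nn as nn
import torchvision.transforms.functional as F
from torchvision import models

NUM_OBJECT_CLASSES = 10   # assumed size of the object taxonomy
CROP_SIZE = 224           # assumed size of the gaze-centred crop

def gaze_crop(frame, fixation_xy, size=CROP_SIZE):
    """Crop a square patch centred on the gaze fixation (x, y) in pixels."""
    x, y = fixation_xy
    top = max(int(y - size // 2), 0)
    left = max(int(x - size // 2), 0)
    return F.crop(frame, top, left, size, size)

class GazeGuidedClassifier(nn.Module):
    """CNN classifying the object-to-grasp inside the gaze-centred crop."""
    def __init__(self, num_classes=NUM_OBJECT_CLASSES):
        super().__init__()
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, crops):              # crops: (B, 3, H, W)
        return self.backbone(crops)        # logits over the object taxonomy

class GraspMomentDetector(nn.Module):
    """LSTM over a sequence of gaze/visual features; binary grasp decision."""
    def __init__(self, feat_dim=130, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, seq):                # seq: (B, T, feat_dim)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])       # decision at the last time step

# Dummy tensors only, to illustrate the intended data flow.
frame = torch.rand(3, 480, 640)
crop = gaze_crop(frame, fixation_xy=(320, 240)).unsqueeze(0)
obj_logits = GazeGuidedClassifier()(crop)
grasp_logits = GraspMomentDetector()(torch.rand(1, 30, 130))

The weakly supervised aspect mentioned in the abstract (video-level object labels with a weak loss, to cope with noisy fixations) would replace the standard classification loss during training; it is not shown here.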

Details

Language :
English
ISSN :
0031-3203
Database :
OpenAIRE
Journal :
Pattern Recognition, Elsevier, 2019, 88, pp. 223-235. ⟨10.1016/j.patcog.2018.11.013⟩
Accession number :
edsair.doi.dedup.....9d65807bc6d68eb862b4b2620d024c27
Full Text :
https://doi.org/10.1016/j.patcog.2018.11.013