
Multi-Cue Event Information Fusion for Pedestrian Detection With Neuromorphic Vision Sensors

Authors :
Chen, Guang
Cao, Hu
Ye, Canbo
Zhang, Zhenyan
Liu, Xingbo
Mo, Xuhui
Qu, Zhongnan
Conradt, Jörg
Roehrbein, Florian
Knoll, Alois
Publication Year :
2019

Abstract

Neuromorphic vision sensors are bio-inspired cameras that naturally capture the dynamics of a scene with ultra-low latency, filtering out redundant information with low power consumption. Few works address object detection with this sensor. In this work, we develop pedestrian detectors that unlock the potential of event data by leveraging multi-cue information and different fusion strategies. To make the best use of the event data, we introduce three event-stream encoding methods based on Frequency, Surface of Active Events (SAE), and Leaky Integrate-and-Fire (LIF). We further integrate them into state-of-the-art neural network architectures with two fusion approaches: channel-level fusion of the raw feature space and decision-level fusion with probability assignments. We present a qualitative and quantitative explanation of why the different encoding methods were chosen for evaluating pedestrian detection and which method performs best. We demonstrate the advantages of decision-level fusion by leveraging multi-cue event information, and show that our approach performs well on a self-annotated event-based pedestrian dataset with 8,736 event frames. This work paves the way for more fascinating perception applications with neuromorphic vision sensors.
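To illustrate one of the encodings named in the abstract, the sketch below shows a minimal Surface of Active Events (SAE) frame: each pixel stores the timestamp of its most recent event, normalized over the frame's time window. This is a generic illustration of the SAE idea, not the paper's exact implementation; the function name, event tuple layout `(x, y, t, polarity)`, and min-max normalization are assumptions (some SAE variants use exponential time decay instead).

```python
import numpy as np

def sae_frame(events, width, height):
    """Build a Surface of Active Events frame (illustrative sketch).

    events: iterable of (x, y, t, polarity) tuples with t > 0.
    Each pixel keeps only the timestamp of its latest event; the
    result is min-max normalized to [0, 1] over active pixels.
    """
    sae = np.zeros((height, width), dtype=np.float64)
    for x, y, t, polarity in events:
        # Later events overwrite earlier ones at the same pixel.
        sae[y, x] = t
    active = sae > 0
    if active.any():
        t_min, t_max = sae[active].min(), sae[active].max()
        if t_max > t_min:
            # Note: the pixel carrying the earliest event maps to 0,
            # the same value as pixels that never fired.
            sae = np.where(active, (sae - t_min) / (t_max - t_min), 0.0)
    return sae
```

The resulting single-channel frame can be stacked with frames from other encodings (e.g. frequency or LIF) for the channel-level fusion described in the abstract.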

Details

Database :
OAIster
Notes :
English
Publication Type :
Electronic Resource
Accession number :
edsoai.on1234952510
Document Type :
Electronic Resource
Full Text :
https://doi.org/10.3389/fnbot.2019.00010