R-SNN: An Analysis and Design Methodology for Robustifying Spiking Neural Networks against Adversarial Attacks through Noise Filters for Dynamic Vision Sensors
- Publication Year :
- 2021
Abstract
- Spiking Neural Networks (SNNs) aim at providing energy-efficient learning capabilities when implemented on neuromorphic chips with event-based Dynamic Vision Sensors (DVS). This paper studies the robustness of SNNs against adversarial attacks on such DVS-based systems, and proposes R-SNN, a novel methodology for robustifying SNNs through efficient DVS-noise filtering. We are the first to generate adversarial attacks on DVS signals (i.e., frames of events in the spatio-temporal domain) and to apply noise filters for DVS sensors in the quest for defending against adversarial attacks. Our results show that the noise filters effectively prevent the SNNs from being fooled. The SNNs in our experiments provide more than 90% accuracy on the DVS-Gesture and NMNIST datasets under different adversarial threat models.
- To appear at the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021). arXiv admin note: text overlap with arXiv:2107.00415
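To illustrate the kind of DVS-noise filtering the abstract refers to, below is a minimal sketch of a standard background-activity filter for event streams. This is our own illustration under common assumptions (events as `(x, y, t, p)` tuples with timestamps in microseconds), not the paper's exact filter; the radius `s` and window `dt` are hypothetical parameters.

```python
import numpy as np

def background_activity_filter(events, width, height, s=1, dt=5000):
    """Keep an event only if some pixel within spatial radius `s` fired
    within the last `dt` microseconds; otherwise drop it as noise.

    `events` is an iterable of (x, y, t, p) tuples sorted by timestamp t.
    Uncorrelated (noise or adversarially injected) events tend to lack
    such spatio-temporal support and are filtered out.
    """
    # Last-seen timestamp per pixel; -inf means "never fired".
    last_ts = np.full((height, width), -np.inf)
    kept = []
    for x, y, t, p in events:
        x, y = int(x), int(y)
        # Clip the (2s+1) x (2s+1) neighborhood to the sensor array.
        x0, x1 = max(0, x - s), min(width, x + s + 1)
        y0, y1 = max(0, y - s), min(height, y + s + 1)
        neighborhood = last_ts[y0:y1, x0:x1]
        # An event is "supported" if any neighbor fired recently.
        if np.any(t - neighborhood <= dt):
            kept.append((x, y, t, p))
        last_ts[y, x] = t
    return kept
```

For example, a burst of events around one pixel passes the filter, while a single isolated event far from any activity is discarded.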
- Subjects :
- FOS: Computer and information sciences
Computer Science - Machine Learning
Computer Science - Cryptography and Security
Filter
Spiking Neural Networks
SNNs
Deep Learning
Adversarial Attacks
Security
Robustness
Defense
Perturbation
Noise
Dynamic Vision Sensors
DVS
Neuromorphic
Event-Based
DVS-Gesture
NMNIST
Computer Science - Neural and Evolutionary Computing
Machine Learning (cs.LG)
Computer Systems Organization - Special-Purpose and Application-Based Systems
Neural and Evolutionary Computing (cs.NE)
Cryptography and Security (cs.CR)
Details
- Language :
- English
- Database :
- OpenAIRE
- Accession number :
- edsair.doi.dedup.....929b2effe3e63564e5a02a5effcbd82e