1. R-SNN: An Analysis and Design Methodology for Robustifying Spiking Neural Networks against Adversarial Attacks through Noise Filters for Dynamic Vision Sensors
- Authors
Alberto Marchisio, Giacomo Pira, Maurizio Martina, Guido Masera, and Muhammad Shafique
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Science - Cryptography and Security, Filter, Spiking Neural Networks, SNNs, Deep Learning, Adversarial Attacks, Security, Robustness, Defense, Perturbation, Noise, Dynamic Vision Sensors, DVS, Neuromorphic, Event-Based, DVS-Gesture, NMNIST, Computer Science - Neural and Evolutionary Computing, Machine Learning (cs.LG), Computer Systems Organization: Special-Purpose and Application-Based Systems, Neural and Evolutionary Computing (cs.NE), Cryptography and Security (cs.CR)
- Abstract
Spiking Neural Networks (SNNs) aim to provide energy-efficient learning capabilities when implemented on neuromorphic chips with event-based Dynamic Vision Sensors (DVS). This paper studies the robustness of SNNs against adversarial attacks on such DVS-based systems and proposes R-SNN, a novel methodology for robustifying SNNs through efficient DVS-noise filtering. We are the first to generate adversarial attacks on DVS signals (i.e., frames of events in the spatio-temporal domain) and to apply noise filters for DVS sensors in the quest for defending against adversarial attacks. Our results show that the noise filters effectively prevent the SNNs from being fooled. The SNNs in our experiments provide more than 90% accuracy on the DVS-Gesture and NMNIST datasets under different adversarial threat models.
- Comments
To appear at the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021). arXiv admin note: text overlap with arXiv:2107.00415.
- Published
2021
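
The abstract above describes filtering DVS noise as a defense, but this record gives no implementation details. Below is a minimal sketch of the kind of spatio-temporal correlation filter commonly used for DVS denoising (a background-activity filter), which keeps an event only if a nearby pixel fired recently; the function name `filter_dvs_events` and the parameters `s` (spatial neighborhood radius, in pixels) and `t` (time window, in microseconds) are illustrative assumptions, not the paper's exact method or API.

```python
import numpy as np

def filter_dvs_events(events, s=1, t=5000):
    """Spatio-temporal correlation filter for a DVS event stream (sketch).

    Keeps an event only if another event occurred within `s` pixels and
    the last `t` microseconds; isolated events are treated as noise.

    events: iterable of (x, y, timestamp, polarity) tuples with integer
    pixel coordinates, sorted by timestamp.
    """
    width = max(e[0] for e in events) + 1
    height = max(e[1] for e in events) + 1
    # last_ts[x, y] holds the most recent timestamp seen near pixel (x, y)
    last_ts = np.full((width, height), -np.inf)
    kept = []
    for x, y, ts, p in events:
        x0, x1 = max(0, x - s), min(width, x + s + 1)
        y0, y1 = max(0, y - s), min(height, y + s + 1)
        # keep the event if a spatio-temporal neighbor fired recently
        if ts - last_ts[x0:x1, y0:y1].max() <= t:
            kept.append((x, y, ts, p))
        # record this event for future neighborhood checks
        # (variants update only (x, y) instead of the whole neighborhood)
        last_ts[x0:x1, y0:y1] = ts
    return kept
```

In a defense of this kind, the filter is applied to the event stream before it reaches the SNN, so that the sparse, spatio-temporally isolated events an adversary injects are discarded while correlated scene activity passes through; the actual filter and parameter values used in R-SNN may differ.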