Directly training temporal Spiking Neural Network with sparse surrogate gradient.
- Source :
- Neural networks : the official journal of the International Neural Network Society [Neural Netw] 2024 Nov; Vol. 179, pp. 106499. Date of Electronic Publication: 2024 Jul 01.
- Publication Year :
- 2024
Abstract
- Brain-inspired Spiking Neural Networks (SNNs) have attracted much attention due to their event-based computing and energy-efficient features. However, their all-or-none spiking nature has prevented the direct training of SNNs for various applications. The surrogate gradient (SG) algorithm has recently enabled SNNs to shine on neuromorphic hardware. However, introducing surrogate gradients causes SNNs to lose their original sparsity, leading to potential performance loss. In this paper, we first analyze the problems of direct training with SGs and then propose Masked Surrogate Gradients (MSGs) to balance the effectiveness of training against the sparseness of the gradient, thereby improving the generalization ability of SNNs. Moreover, we introduce a temporally weighted output (TWO) method to decode the network output, reinforcing the importance of correct timesteps. Extensive experiments on diverse network structures and datasets show that training with MSG and TWO surpasses state-of-the-art techniques.
- Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
- (Copyright © 2024 Elsevier Ltd. All rights reserved.)
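The abstract does not spell out the MSG formulation, so the following PyTorch sketch is only one plausible reading of it: the fast-sigmoid surrogate, the near-threshold masking band, and the class name MaskedSurrogateSpike are all assumptions, not the paper's definitions. The idea it illustrates is keeping the exact all-or-none spike in the forward pass while zeroing the otherwise dense surrogate gradient for neurons far from threshold, so that the backward pass stays sparse.

```python
import torch

class MaskedSurrogateSpike(torch.autograd.Function):
    """All-or-none spike whose backward pass is a masked surrogate gradient.

    Hypothetical sketch: the mask criterion and surrogate shape are
    illustrative choices, not the formulation from the paper.
    """

    @staticmethod
    def forward(ctx, v, threshold=1.0, alpha=2.0, band=0.5):
        ctx.save_for_backward(v)
        ctx.threshold, ctx.alpha, ctx.band = threshold, alpha, band
        # Forward pass keeps the true spiking nonlinearity.
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        dist = v - ctx.threshold
        # Dense surrogate: derivative of a fast sigmoid centered on the threshold.
        surrogate = ctx.alpha / (1.0 + ctx.alpha * dist.abs()) ** 2
        # Sparsifying mask: gradients flow only for membrane potentials near
        # the threshold, restoring some of the sparsity a dense SG destroys.
        mask = (dist.abs() < ctx.band).float()
        # One gradient per forward input; the scalar hyperparameters get None.
        return grad_out * surrogate * mask, None, None, None
```

A spiking layer would then call spikes = MaskedSurrogateSpike.apply(membrane) in place of a plain surrogate spike function.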
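Likewise, the abstract only says that TWO decodes the network output while reinforcing the importance of correct timesteps; the actual weighting presumably depends on timestep correctness, which cannot be reconstructed from the abstract. The sketch below uses a fixed, linearly increasing schedule purely as a stand-in for that weighting, and temporally_weighted_output is a hypothetical name.

```python
import torch

def temporally_weighted_output(logits_per_step):
    # logits_per_step: tensor of shape [T, batch, num_classes] holding the
    # readout at every timestep of the simulation.
    T = logits_per_step.shape[0]
    # Assumed stand-in schedule: linearly increasing, normalized weights, so
    # later timesteps (which have integrated more evidence) count more than
    # early ones, instead of the usual uniform mean over time.
    w = torch.arange(1, T + 1, dtype=logits_per_step.dtype,
                     device=logits_per_step.device)
    w = w / w.sum()
    # Weighted sum over the time dimension yields the decoded logits.
    return (w.view(T, 1, 1) * logits_per_step).sum(dim=0)
```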
Details
- Language :
- English
- ISSN :
- 1879-2782
- Volume :
- 179
- Database :
- MEDLINE
- Journal :
- Neural networks : the official journal of the International Neural Network Society
- Publication Type :
- Academic Journal
- Accession number :
- 39013289
- Full Text :
- https://doi.org/10.1016/j.neunet.2024.106499