Chip-In-Loop SNN Proxy Learning: a new method for efficient training of spiking neural networks.
- Source :
- Frontiers in Neuroscience; 2024, p1-8, 8p
- Publication Year :
- 2024
-
Abstract
- The primary approaches used to train spiking neural networks (SNNs) involve either training artificial neural networks (ANNs) first and then transforming them into SNNs, or directly training SNNs using surrogate gradient techniques. Nevertheless, both of these methods encounter a shared challenge: they rely on frame-based methodologies, where asynchronous events are gathered into synchronous frames for computation. This strays from the authentic asynchronous, event-driven nature of SNNs, resulting in notable performance degradation when deploying the trained models on SNN simulators or hardware chips for real-time asynchronous computation. To eliminate this performance degradation, we propose a hardware-based SNN proxy learning method called Chip-In-Loop SNN Proxy Learning (CIL-SPL). This approach effectively eliminates the performance degradation caused by the mismatch between synchronous and asynchronous computations. To demonstrate the effectiveness of our method, we trained models using public datasets such as N-MNIST and tested them on the SNN simulator or hardware chip, comparing our results to those of classical training methods. [ABSTRACT FROM AUTHOR]
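The frame-based preprocessing the abstract criticizes can be illustrated with a minimal sketch: asynchronous events (timestamp, x, y, polarity), as produced by event cameras for datasets like N-MNIST, are binned into a fixed number of synchronous frames. This is a generic illustration of the technique, not the authors' code; the function name and array layout are assumptions.

```python
import numpy as np

def events_to_frames(events, num_frames, height, width):
    """Bin asynchronous (t, x, y, polarity) events into synchronous frames.

    Illustrative sketch of frame-based preprocessing for event data;
    layout assumed: frames[frame, polarity, y, x] holds event counts.
    """
    frames = np.zeros((num_frames, 2, height, width), dtype=np.float32)
    t = events[:, 0]
    # Split the event stream's time span into equal-length windows.
    edges = np.linspace(t.min(), t.max(), num_frames + 1)
    # Assign each event to the window containing its timestamp.
    idx = np.clip(np.searchsorted(edges, t, side="right") - 1, 0, num_frames - 1)
    for f, (_, x, y, p) in zip(idx, events):
        frames[f, int(p), int(y), int(x)] += 1.0
    return frames

# Three toy events spanning 10 time units, binned into 2 frames.
events = np.array([[0.0, 1, 1, 0],
                   [5.0, 2, 2, 1],
                   [10.0, 1, 1, 0]])
frames = events_to_frames(events, num_frames=2, height=4, width=4)
```

Collapsing event timing into such windows is exactly the synchronous approximation that, per the abstract, causes accuracy loss once the trained model runs asynchronously on a chip.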
- Subjects :
- ARTIFICIAL neural networks
Details
- Language :
- English
- ISSN :
- 1662-4548
- Database :
- Complementary Index
- Journal :
- Frontiers in Neuroscience
- Publication Type :
- Academic Journal
- Accession number :
- 174849512
- Full Text :
- https://doi.org/10.3389/fnins.2023.1323121