
Auto-Spikformer: Spikformer Architecture Search

Authors:
Che, Kaiwei
Zhou, Zhaokun
Ma, Zhengyu
Fang, Wei
Chen, Yanqi
Shen, Shuaijie
Yuan, Li
Tian, Yonghong
Publication Year:
2023

Abstract

The integration of self-attention mechanisms into Spiking Neural Networks (SNNs) has garnered considerable interest in advanced deep learning, primarily due to their biological properties. Recent SNN architectures such as Spikformer have demonstrated promising results by leveraging Spiking Self-Attention (SSA) and Spiking Patch Splitting (SPS) modules. However, we observe that Spikformer may exhibit excessive energy consumption, potentially attributable to redundant channels and blocks. To mitigate this issue, we propose Auto-Spikformer, a one-shot Transformer Architecture Search (TAS) method that automates the search for an optimized Spikformer architecture. To facilitate the search process, we propose Evolutionary SNN neurons (ESNN), which optimize the SNN parameters, and apply the previously proposed weight-entanglement supernet training, which optimizes the Vision Transformer (ViT) parameters. Moreover, we propose an accuracy- and energy-balanced fitness function $\mathcal{F}_{AEB}$ that jointly considers energy consumption and accuracy and aims to find a Pareto-optimal combination of these two objectives. Our experimental results demonstrate the effectiveness of Auto-Spikformer, which outperforms state-of-the-art methods, including manually and automatically designed CNN and ViT models, while significantly reducing energy consumption.
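The abstract does not give the exact form of $\mathcal{F}_{AEB}$. As a rough illustration only, the sketch below assumes a simple weighted trade-off between validation accuracy and normalized energy, which is one common way to score candidates in an evolutionary architecture search; the function `fitness_aeb`, the parameters `alpha` and `energy_budget_mj`, and the toy candidate values are all hypothetical and not taken from the paper.

```python
# Hypothetical sketch of an accuracy/energy-balanced fitness for an
# evolutionary search. The real F_AEB in Auto-Spikformer may differ.

def fitness_aeb(accuracy: float, energy_mj: float,
                energy_budget_mj: float = 10.0, alpha: float = 0.5) -> float:
    """Score a candidate architecture; higher is better.

    accuracy         -- validation accuracy in [0, 1]
    energy_mj        -- estimated inference energy in millijoules
    energy_budget_mj -- reference budget used to normalize energy (assumed)
    alpha            -- trade-off weight between accuracy and energy (assumed)
    """
    energy_term = min(energy_mj / energy_budget_mj, 1.0)  # normalized cost
    return alpha * accuracy - (1.0 - alpha) * energy_term


# Toy selection step: pick the candidate with the best combined score.
candidates = [
    {"name": "A", "accuracy": 0.781, "energy_mj": 11.6},  # toy numbers
    {"name": "B", "accuracy": 0.772, "energy_mj": 6.2},
    {"name": "C", "accuracy": 0.755, "energy_mj": 4.1},
]
best = max(candidates,
           key=lambda c: fitness_aeb(c["accuracy"], c["energy_mj"]))
print(best["name"], fitness_aeb(best["accuracy"], best["energy_mj"]))
```

Under this scoring, a candidate with slightly lower accuracy but much lower energy (e.g. "B") can outrank a more accurate but energy-hungry one, which is the kind of Pareto-style balance the abstract describes.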

Details

Database:
OAIster
Publication Type:
Electronic Resource
Accession number:
edsoai.on1381632499
Document Type:
Electronic Resource