
Energy-Efficient Spiking Segmenter for Frame and Event-Based Images.

Authors :
Zhang, Hong
Fan, Xiongfei
Zhang, Yu
Source :
Biomimetics (2313-7673). Aug 2023, Vol. 8, Issue 4, p356. 18p.
Publication Year :
2023

Abstract

Semantic segmentation predicts dense pixel-wise semantic labels, which is crucial for autonomous environment perception systems. For applications on mobile devices, current research focuses on energy-efficient segmenters for both frame- and event-based cameras. However, there is currently no artificial neural network (ANN) that can perform efficient segmentation on both types of images. This paper introduces the spiking neural network (SNN), a bionic model that is energy-efficient when implemented on neuromorphic hardware, and develops a Spiking Context Guided Network (Spiking CGNet) with substantially lower energy consumption and comparable performance for both frame- and event-based images. First, this paper proposes a spiking context guided block that can extract local features and context information with spike computations. On this basis, the directly trained SCGNet-S and SCGNet-L are established for both frame- and event-based images. Our method is verified on the frame-based dataset Cityscapes and the event-based dataset DDD17. On the Cityscapes dataset, SCGNet-S achieves results comparable to the ANN CGNet with 4.85× higher energy efficiency. On the DDD17 dataset, Spiking CGNet outperforms other spiking segmenters by a large margin. [ABSTRACT FROM AUTHOR]
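The abstract describes a spiking context guided block that combines local features with wider context information using spike computations. The paper's own architecture and training code are not reproduced here; the following is a minimal, illustrative sketch of what such a block could look like, assuming a PyTorch implementation with a single-step Leaky Integrate-and-Fire (LIF) neuron and a rectangular surrogate gradient. The layer widths, dilation rate, and LIF constants are assumptions for illustration only, not the authors' published configuration.

```python
# Hypothetical sketch of a spiking context guided block: a local-feature
# branch and a dilated (context) branch are fused and converted to binary
# spikes by a LIF neuron with a surrogate gradient. Not the authors' code.
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate gradient in backward."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Pass gradients only near the firing threshold (window half-width 0.5).
        return grad_out * (v.abs() < 0.5).float()


class LIFNeuron(nn.Module):
    """Single-step LIF neuron: leaky integration, firing, hard reset to zero."""

    def __init__(self, tau: float = 2.0, v_th: float = 1.0):
        super().__init__()
        self.tau, self.v_th = tau, v_th
        self.v = None  # membrane potential carried across time steps

    def reset(self):
        self.v = None

    def forward(self, x):
        if self.v is None:
            self.v = torch.zeros_like(x)
        self.v = self.v + (x - self.v) / self.tau          # leaky integration
        spike = SurrogateSpike.apply(self.v - self.v_th)   # binary spike output
        self.v = self.v * (1.0 - spike)                    # reset where a spike fired
        return spike


class SpikingContextGuidedBlock(nn.Module):
    """Fuse a local 3x3 branch with a dilated (context) branch, then spike."""

    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        half = out_ch // 2
        self.local = nn.Sequential(
            nn.Conv2d(in_ch, half, 3, padding=1, bias=False),
            nn.BatchNorm2d(half),
        )
        self.context = nn.Sequential(
            nn.Conv2d(in_ch, half, 3, padding=dilation, dilation=dilation, bias=False),
            nn.BatchNorm2d(half),
        )
        self.spike = LIFNeuron()

    def forward(self, x):
        # Concatenate local and wider-context features, then emit spikes.
        fused = torch.cat([self.local(x), self.context(x)], dim=1)
        return self.spike(fused)


if __name__ == "__main__":
    block = SpikingContextGuidedBlock(in_ch=3, out_ch=32)
    frames = torch.rand(4, 2, 3, 64, 64)  # (time steps, batch, C, H, W)
    block.spike.reset()
    outputs = [block(frames[t]) for t in range(frames.shape[0])]
    print(outputs[0].shape)  # torch.Size([2, 32, 64, 64])
```

In a full segmenter, such blocks would be stacked into an encoder and run over several time steps, with the spike outputs accumulated before the decoding head; event-based inputs from a dataset such as DDD17 would replace the dense frames used above.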

Subjects

Subjects :
*ENERGY consumption

Details

Language :
English
ISSN :
23137673
Volume :
8
Issue :
4
Database :
Academic Search Index
Journal :
Biomimetics (2313-7673)
Publication Type :
Academic Journal
Accession number :
170709598
Full Text :
https://doi.org/10.3390/biomimetics8040356