
DeepFire2: A Convolutional Spiking Neural Network Accelerator on FPGAs

Authors:
Aung, Myat Thu Linn
Gerlinghoff, Daniel
Qu, Chuping
Yang, Liwei
Huang, Tian
Goh, Rick Siow Mong
Luo, Tao
Wong, Weng-Fai
Publication Year:
2023

Abstract

Brain-inspired spiking neural networks (SNNs) replace the multiply-accumulate operations of traditional neural networks with integrate-and-fire neurons, with the goal of achieving greater energy efficiency. Specialized hardware implementations of those neurons clearly have advantages over general-purpose devices in terms of power and performance, but they exhibit poor scalability when it comes to accelerating large neural networks. DeepFire2 introduces a hardware architecture that can map large network layers efficiently across multiple super logic regions in a multi-die FPGA. This gives more control over resource allocation and parallelism, benefiting both throughput and energy consumption. Avoiding the use of lookup tables to implement the AND operations of an SNN prevents the layer size from being limited by logic resources. A deep pipeline not only raises the clock speed to up to 600 MHz, but also helps double the throughput and power efficiency compared to the previous version of DeepFire, which equates to an almost 10-fold improvement over other previous implementations. Importantly, we are able to deploy a large ImageNet model while maintaining a throughput of over 1500 frames per second.
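As a point of clarification (not part of the paper itself), the abstract's claim that integrate-and-fire neurons replace multiply-accumulate operations can be illustrated with a minimal software sketch of one timestep of an integrate-and-fire layer: because input spikes are binary, each synapse contributes its weight through a conditional add gated by the spike (the AND operation mentioned above), and a neuron fires once its membrane potential crosses a threshold. All names, shapes, and the threshold value below are illustrative assumptions, not details taken from DeepFire2.

import numpy as np

def integrate_and_fire_layer(spikes_in, weights, v_mem, threshold=1.0):
    """One timestep of an integrate-and-fire layer (illustrative sketch).

    spikes_in : binary vector of input spikes, shape (n_in,)
    weights   : synaptic weights, shape (n_out, n_in)
    v_mem     : membrane potentials carried across timesteps, shape (n_out,)

    Because spikes are 0/1, the weighted sum reduces to summing the weights
    of the active inputs (conditional adds), so no multiplications are needed.
    """
    # Equivalent to weights @ spikes_in, written as spike-gated accumulation.
    v_mem = v_mem + weights[:, spikes_in.astype(bool)].sum(axis=1)
    spikes_out = (v_mem >= threshold).astype(np.uint8)  # fire on threshold crossing
    v_mem = np.where(spikes_out == 1, 0.0, v_mem)       # reset neurons that fired
    return spikes_out, v_mem

# Example usage with random data (illustrative only).
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 8)).astype(np.float32)
v_mem = np.zeros(4, dtype=np.float32)
spikes_in = rng.integers(0, 2, size=8).astype(np.uint8)
spikes_out, v_mem = integrate_and_fire_layer(spikes_in, weights, v_mem)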

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2305.05187
Document Type:
Working Paper
Full Text:
https://doi.org/10.1109/TC.2023.3272284