1. Always-On Sub-Microwatt Spiking Neural Network Based on Spike-Driven Clock- and Power-Gating for an Ultra-Low-Power Intelligent Device
- Authors
Joao P. Cerqueira, Sung Justin Kim, Joonsung Kang, Sang Joon Kim, Minhao Yang, Dewei Wang, Pavan Kumar Chundi, Seungchul Jung, and Mingoo Seok
- Subjects
spiking neural network, clock and power gating, power gating, always-on device, event-driven architecture, neuromorphic hardware, CMOS, scalability, electrical efficiency, neuroscience
- Abstract
This paper presents a novel spiking neural network (SNN) classifier architecture that enables always-on artificial intelligence (AI) functions, such as keyword spotting (KWS) and visual wake-up, in ultra-low-power internet-of-things (IoT) devices. Such always-on hardware tends to dominate the power budget of an IoT device, so minimizing its power dissipation is paramount. A key observation is that the input signal to always-on hardware is typically sparse in time. An SNN classifier can exploit this sparsity because the switching activity, and hence the power consumption, of SNN hardware can scale with the spike rate. To leverage this scalability, the proposed SNN classifier architecture employs an event-driven design, in particular fine-grained clock generation and gating and fine-grained power gating, to achieve very low static power dissipation. The prototype is fabricated in 65 nm CMOS and occupies 1.99 mm². At a 0.52 V supply voltage, it consumes 75 nW with no input activity and less than 300 nW at 100% input activity, while maintaining competitive inference accuracy for KWS and other always-on classification workloads. The prototype reduces power consumption by more than three orders of magnitude compared to state-of-the-art SNN hardware and by about 2.3× compared to state-of-the-art KWS hardware.
- Published
- 2021
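The event-driven principle the abstract describes, in which work is performed only when a spike arrives so that switching activity scales with spike rate, can be sketched in software as a leaky integrate-and-fire neuron updated lazily between input events. This is an illustrative sketch only: the class, constants, and spike times below are hypothetical and are not taken from the paper's hardware.

```python
import math

class EventDrivenLIF:
    """Leaky integrate-and-fire neuron updated only on input spikes.

    Between spikes the membrane potential decays analytically in one
    closed-form step, so no computation happens while the input is
    silent. This mirrors, in software, how spike-driven clock and
    power gating lets power scale with input activity.
    (Hypothetical sketch; parameters are not from the chip.)
    """

    def __init__(self, tau=20.0, threshold=1.0, weight=0.3):
        self.tau = tau              # membrane time constant (ms)
        self.threshold = threshold  # firing threshold
        self.weight = weight        # synaptic weight per input spike
        self.v = 0.0                # membrane potential
        self.last_t = 0.0           # time of the last update (ms)

    def on_spike(self, t):
        """Process one input spike at time t; return True if the neuron fires."""
        # Apply the leak for the entire silent interval at once.
        self.v *= math.exp(-(t - self.last_t) / self.tau)
        self.last_t = t
        self.v += self.weight
        if self.v >= self.threshold:
            self.v = 0.0            # reset after firing
            return True
        return False

# Sparse input: the amount of work is proportional to the number of
# spikes, not to elapsed time, regardless of the silent gap (4 -> 100).
neuron = EventDrivenLIF()
out = [t for t in [1, 2, 3, 4, 100, 101, 102, 103] if neuron.on_spike(t)]
# out == [4, 103]
```

With these assumed constants, four closely spaced spikes accumulate past the threshold, the neuron fires and resets, and the long silent gap costs nothing because the decay is folded into the next event's update.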