An Energy Efficient Time-Multiplexing Computing-in-Memory Architecture for Edge Intelligence
- Source :
- IEEE Journal on Exploratory Solid-State Computational Devices and Circuits, Vol. 8, Iss. 2, pp. 111-118 (2022)
- Publication Year :
- 2022
- Publisher :
- IEEE, 2022.
-
Abstract
- The growing data volume and complexity of deep neural networks (DNNs) require new architectures to surpass the limitation of the von Neumann bottleneck, with computing-in-memory (CIM) as a promising direction for implementing energy-efficient neural networks. However, CIM's peripheral sensing circuits are usually power- and area-hungry components. We propose a time-multiplexing CIM architecture (TM-CIM) based on memristive analog computing that shares the peripheral circuits and processes one column at a time. The memristor array is arranged column-wise so that no power or energy is wasted on unselected columns. In addition, the digital-to-analog converter (DAC), which turns out to be an even greater power and energy overhead than the analog-to-digital converter (ADC), can be fine-tuned in TM-CIM for significant improvement. For a 256×256 crossbar array with a typical setting, TM-CIM saves 18.4× in energy with 0.136 pJ/MAC efficiency, and saves 19.9× in area for the 1T1R case and 15.9× for the 2T2R case. Performance estimation on VGG-16 indicates that TM-CIM can save over 16× in area. A tradeoff between chip area, peak power, and latency is also presented, together with a scheme that further reduces latency on VGG-16 without significantly increasing chip area or peak power.
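- The following is a minimal Python sketch of the column-at-a-time idea summarized in the abstract: a single shared set of peripheral converters serves the whole crossbar by selecting one column per step, instead of one ADC per column. Array sizes, conductance ranges, and all function and variable names here are illustrative assumptions, not the paper's implementation or measured parameters.

```python
import numpy as np

# Illustrative parameters only (assumed, not taken from the paper).
ROWS, COLS = 256, 256

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(ROWS, COLS))  # memristor conductances (S)
v_in = rng.uniform(0.0, 0.2, size=ROWS)         # DAC-driven row voltages (V)

def tm_cim_mvm(G, v_in):
    """Time-multiplexed matrix-vector multiply: activate one column per step,
    so one shared ADC digitizes each column current in turn and unselected
    columns draw no current."""
    outputs = np.empty(G.shape[1])
    for col in range(G.shape[1]):
        i_col = np.dot(G[:, col], v_in)  # analog column current (A), Ohm's law + KCL
        outputs[col] = i_col             # shared ADC samples this single current
    return outputs

def parallel_cim_mvm(G, v_in):
    """Conventional CIM baseline: all columns computed at once,
    requiring peripheral sensing circuitry on every column."""
    return G.T @ v_in

out_tm = tm_cim_mvm(G, v_in)
out_par = parallel_cim_mvm(G, v_in)
# Same numerical result; the difference is peripheral circuit count,
# energy per operation, and latency (COLS steps vs. one step).
assert np.allclose(out_tm, out_par)
```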
Details
- Language :
- English
- ISSN :
- 2329-9231
- Volume :
- 8
- Issue :
- 2
- Database :
- Directory of Open Access Journals
- Journal :
- IEEE Journal on Exploratory Solid-State Computational Devices and Circuits
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.40ce0fc3ed6a428392c3242e85d4799a
- Document Type :
- article
- Full Text :
- https://doi.org/10.1109/JXCDC.2022.3206879