8T XNOR-SRAM based Parallel Compute-in-Memory for Deep Neural Network Accelerator
- Source :
- MWSCAS
- Publication Year :
- 2020
- Publisher :
- IEEE, 2020.
Abstract
- A 65nm XNOR-SRAM macro is presented for a binary deep neural network (DNN) accelerator. It features 1) a custom XNOR-SRAM bit-cell design with 8 transistors (8T); 2) a fully parallel compute-in-memory capability of XNOR bit-counting for inference. A multi-level sense amplifier is employed as a flash analog-to-digital converter (ADC) at the edge of the array. The impact of ADC offset on algorithm-level accuracy is statistically evaluated on a VGG-like XNOR-Net with the CIFAR-10 dataset, showing a marginal degradation of ~1.77%. Read disturb of the SRAM bit-cell during parallel computation is also evaluated, and its impact is minimized by lowering the word-line (WL) voltage. Silicon measurement results show that the proposed XNOR-SRAM unit-macro achieves a fast access time (2.3 ns) for parallel bit-counting and a high energy efficiency of 44.8-62.8 TOPS/W depending on the WL voltage.
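To make the XNOR bit-counting scheme concrete, below is a minimal NumPy sketch of how a {-1, +1} binary dot product reduces to elementwise XNOR followed by a popcount, plus a simple flash-ADC-style quantizer that injects a random comparator offset before quantization. The function names, the 11-level ADC, the 256-row column size, and the offset sigma are illustrative assumptions, not details from the paper.

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} via sign binarization."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def xnor_bitcount(acts, weights):
    """Binary dot product via XNOR + popcount (bit-counting).

    For values in {-1, +1}, XNOR of two bits equals their product,
    so summing the elementwise products along each column gives the
    parallel bit-count the macro computes on its bit-lines.
    """
    return (acts[:, None] * weights).sum(axis=0)

def flash_adc(counts, n_rows, levels=11, sigma_offset=0.0, rng=None):
    """Quantize analog bit-counts with a flash-ADC-style converter.

    Models `levels` uniform decision levels over [-n_rows, +n_rows]
    and adds a Gaussian comparator offset with std `sigma_offset`
    before quantization (all parameters here are illustrative).
    """
    rng = np.random.default_rng() if rng is None else rng
    offset = rng.normal(0.0, sigma_offset, size=np.shape(counts))
    step = 2.0 * n_rows / (levels - 1)
    codes = np.clip(np.round((counts + offset + n_rows) / step), 0, levels - 1)
    return codes * step - n_rows

# Usage: a 256-row column reading out 64 outputs in parallel.
rng = np.random.default_rng(0)
a = binarize(rng.standard_normal(256))
W = binarize(rng.standard_normal((256, 64)))
ideal = xnor_bitcount(a, W)
readout = flash_adc(ideal, n_rows=256, sigma_offset=4.0, rng=rng)
print("max |error| after ADC + offset:", np.max(np.abs(ideal - readout)))
```

Sweeping `sigma_offset` over Monte Carlo trials and propagating the quantized counts through a binary network would give the kind of statistical accuracy evaluation the abstract describes for the VGG-like XNOR-Net on CIFAR-10.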
- Subjects :
- Artificial neural network
Sense amplifier
Computer science
Transistor
XNOR gate
Electronic engineering
Static random-access memory
Access time
Voltage
Details
- Database :
- OpenAIRE
- Journal :
- 2020 IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS)
- Accession number :
- edsair.doi...........7742387b8d7241e25dad90a54f74e5c7
- Full Text :
- https://doi.org/10.1109/mwscas48704.2020.9184455