1. Two-Way Transpose Multibit 6T SRAM Computing-in-Memory Macro for Inference-Training AI Edge Chips
- Authors
- Yen-Lin Chung, Ting-Wei Chang, Ren-Shuo Liu, Fu-Chun Chang, Jian-Wei Su, Sih-Han Li, Hongwu Jiang, Shimeng Yu, Ta-Wei Liu, Yung-Ning Tu, Kea-Tiong Tang, Chung-Chuan Lo, Meng-Fan Chang, Shanshi Huang, Yuan Wu, Wei-Hsing Huang, Yen-Chi Chou, Chih-Cheng Hsieh, Pei-Jung Lu, Jing-Hong Wang, Ruhui Liu, Jin-Sheng Ren, Chih-I Wu, Xin Si, and Shyh-Shyuan Sheu
- Subjects
Process variation, Edge device, Computer science, Computation, Transpose, Static random-access memory, Enhanced Data Rates for GSM Evolution, Electrical and Electronic Engineering, Macro, Computational science, Efficient energy use
- Abstract
Computing-in-memory (CIM) based on SRAM is a promising approach to achieving energy-efficient multiply-and-accumulate (MAC) operations in artificial intelligence (AI) edge devices; however, existing SRAM-CIM chips support only deep neural network (DNN) inference. The flow of training data requires that CIM arrays perform convolutional computation using transposed weight matrices. This article presents a two-way transpose (TWT) multiply cell with high resistance to process variation and a novel read scheme that uses input-aware zone prediction of maximum partial MAC values to enhance the signal margin for robust readout. A 28-nm 64-kb TWT CIM macro fabricated using foundry-provided compact 6T-SRAM cells achieved $T_{AC}$ of 3.8-21 ns and energy efficiency of 7-61.1 TOPS/W in performing MAC operations using 2-8-b inputs, 4-8-b weights, and 10-20-b outputs.
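The requirement for transposed-weight MACs during training can be seen in how errors propagate backward through a layer. The following minimal NumPy sketch (not from the paper; the fully connected layer, shapes, and bit-width choices are illustrative assumptions) shows that the forward pass uses row-wise MACs with W while the backward pass uses column-wise MACs with W.T, which is what a two-way transpose CIM array supports in place.

```python
import numpy as np

# Minimal sketch (illustrative only, not the paper's implementation):
# why inference-training support needs MACs with both W and W.T.
rng = np.random.default_rng(0)
W = rng.integers(-8, 8, size=(10, 20)).astype(np.int32)  # low-precision weights (hypothetical 4-b range)
x = rng.integers(0, 4, size=20).astype(np.int32)         # low-precision inputs (hypothetical 2-b range)

# Inference (forward pass): row-wise MACs of the weight array with the input.
y = W @ x

# Training (backward pass): propagating the error to the previous layer needs
# column-wise MACs, i.e., MACs with the transposed weight matrix W.T.
dy = rng.standard_normal(10)
dx = W.T @ dy

# A two-way transpose (TWT) macro performs both operations on the same stored
# weights, without rewriting or duplicating the array for the backward pass.
print(y.shape, dx.shape)  # (10,) (20,)
```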
- Published
- 2022