Energy-Efficient Neural Network Acceleration Using Most Significant Bit-Guided Approximate Multiplier.
- Source :
- Electronics (2079-9292); Aug 2024, Vol. 13, Issue 15, p3034, 17p
- Publication Year :
- 2024
Abstract
- The escalating computational demands of deep learning and large-scale models have led to a significant increase in energy consumption, highlighting the urgent need for more energy-efficient hardware designs. This study presents a novel weight approximation strategy designed for quantized neural networks (NNs), resulting in an efficient approximate multiplier based on most significant one (MSO) shifting. Compared to both energy-efficient logarithmic approximate multipliers and accuracy-prioritized non-logarithmic approximate multipliers, the proposed logarithmic-like design achieves a superior balance between accuracy and hardware cost. Relative to the baseline exact multiplier, the design reduces area by up to 28.31%, power consumption by 57.84%, and delay by 11.86%. Experimental results show that the proposed multiplier, when applied in neural networks, can save approximately 60% of energy without compromising task accuracy. Experiments on a transformer accelerator and on image processing further demonstrate the substantial energy savings attainable for Large Language Models (LLMs) and image processing tasks, validating the design's efficacy and practicality. [ABSTRACT FROM AUTHOR]
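- The record does not include the authors' implementation. As an illustrative sketch only, the Python snippet below shows the general idea behind logarithmic-like approximate multiplication driven by the most significant one (MSO): each operand is reduced to its MSO position plus a fractional remainder, and the product is formed with additions and shifts instead of a partial-product array. This is a classic Mitchell-style approximation, not the paper's exact design, and the names mso and approx_mul are placeholders.

      def mso(x: int) -> int:
          # Position of the most significant one (MSO) of a positive integer.
          return x.bit_length() - 1

      def approx_mul(a: int, b: int) -> int:
          # Mitchell-style logarithmic approximate multiply: log2(x) is
          # approximated as k + f/2**k, where k is the MSO position and
          # f = x - 2**k is the fractional remainder. The log terms are added
          # and converted back using shifts only, with no partial-product array.
          if a == 0 or b == 0:
              return 0
          ka, kb = mso(a), mso(b)
          fa, fb = a - (1 << ka), b - (1 << kb)
          # Sum of the fractional parts, held as a fixed-point value scaled by 2**(ka+kb).
          frac = (fa << kb) + (fb << ka)
          if frac < (1 << (ka + kb)):
              # No carry out of the fraction: product ~ 2**(ka+kb) * (1 + frac')
              return (1 << (ka + kb)) + frac
          # Carry out of the fraction: product ~ 2**(ka+kb+1) * frac'
          return frac << 1

      # Quick check of the approximation error against exact products.
      for a, b in [(13, 27), (100, 200), (255, 7)]:
          exact, approx = a * b, approx_mul(a, b)
          print(f"{a}*{b}: exact={exact}, approx={approx}, "
                f"error={100 * (exact - approx) / exact:.2f}%")

- The worst-case error of this classic logarithmic approximation is bounded (roughly 11%), which illustrates why logarithmic-like multipliers can trade a small, controlled accuracy loss for large area and power savings in quantized NN workloads; the paper's specific MSO-guided weight approximation is detailed in the full text linked below.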
Details
- Language :
- English
- ISSN :
- 20799292
- Volume :
- 13
- Issue :
- 15
- Database :
- Complementary Index
- Journal :
- Electronics (2079-9292)
- Publication Type :
- Academic Journal
- Accession number :
- 178947696
- Full Text :
- https://doi.org/10.3390/electronics13153034