
Evaluations on Deep Neural Networks Training Using Posit Number System

Authors :
Zhongfeng Wang
Chao Fang
Jun Lin
Jinming Lu
Mingyang Xu
Source :
IEEE Transactions on Computers. 70:174-187
Publication Year :
2021
Publisher :
Institute of Electrical and Electronics Engineers (IEEE), 2021.

Abstract

Training Deep Neural Networks (DNNs) imposes enormous memory requirements and computational complexity, which makes it challenging to train DNN models on resource-constrained devices. Training DNNs with reduced-precision data representations is crucial to mitigating this problem. In this article, we conduct a thorough investigation of training DNNs with low-bit posit numbers, a Type-III universal number (Unum) format. Through a comprehensive analysis of quantization with various data formats, we demonstrate that the posit format shows great potential for use in DNN training. Moreover, we propose a DNN training framework using 8-bit posits together with a novel tensor-wise scaling scheme. Experiments show that our framework matches state-of-the-art (SOTA) performance across multiple datasets (MNIST, CIFAR-10, ImageNet, and Penn Treebank) and model architectures (LeNet-5, AlexNet, ResNet, MobileNet-V2, and LSTM). We further design an energy-efficient hardware prototype for our framework. Compared to its standard floating-point counterpart, our design reduces area, power, and memory capacity by 68, 51, and 75 percent, respectively.
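
The abstract does not specify the posit configuration or the details of the tensor-wise scaling scheme. The sketch below is only an illustrative Python rendering of 8-bit posit quantization with a simple max-based per-tensor scale, assuming a posit(8,1) format; the helper names (decode_posit, posit8_table, quantize_tensor) are hypothetical and not taken from the paper.

import numpy as np

def decode_posit(bits: int, n: int = 8, es: int = 1) -> float:
    """Decode an n-bit posit bit pattern with es exponent bits to a float."""
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):              # NaR (not a real)
        return float("nan")
    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0:                           # two's complement, then decode the magnitude
        bits = (-bits) & ((1 << n) - 1)
    body = bits & ((1 << (n - 1)) - 1)
    rest = [(body >> i) & 1 for i in range(n - 2, -1, -1)]   # bits after the sign, MSB first
    first, run = rest[0], 1
    while run < len(rest) and rest[run] == first:            # regime: run of identical bits
        run += 1
    regime = (run - 1) if first == 1 else -run
    tail = rest[run + 1:] if run < len(rest) else []          # skip the terminating regime bit
    exp_bits = tail[:es]
    exp = 0
    for b in exp_bits:
        exp = (exp << 1) | b
    exp <<= es - len(exp_bits)                                # missing exponent bits are zero
    frac = 1.0
    for i, b in enumerate(tail[es:]):                         # fraction with implicit leading 1
        frac += b * 2.0 ** (-(i + 1))
    return sign * (2.0 ** (regime * (1 << es) + exp)) * frac

def posit8_table(es: int = 1) -> np.ndarray:
    """All finite values representable by an 8-bit posit, sorted ascending."""
    vals = [decode_posit(p, 8, es) for p in range(256)]
    return np.sort(np.array([v for v in vals if np.isfinite(v)]))

def quantize_tensor(x: np.ndarray, table: np.ndarray):
    """Illustrative tensor-wise scaling + round-to-nearest 8-bit posit.
    NOTE: a sketch under assumptions, not the paper's exact scheme."""
    scale = np.max(np.abs(x)) + 1e-30      # map the largest magnitude to 1.0
    xs = x / scale
    idx = np.clip(np.searchsorted(table, xs), 1, len(table) - 1)
    lo, hi = table[idx - 1], table[idx]
    q = np.where(np.abs(xs - lo) <= np.abs(hi - xs), lo, hi)
    return q * scale, scale

The per-tensor scale places the tensor's largest magnitude at 1.0, where the posit format concentrates its precision; the paper's actual scaling and rounding rules may differ.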

Details

ISSN :
2326-3814 and 0018-9340
Volume :
70
Database :
OpenAIRE
Journal :
IEEE Transactions on Computers
Accession number :
edsair.doi...........873cd59ec6e524c69f77931bd04ae596