Comparative Study of Different Reduced Precision Techniques in Deep Neural Network
- Source:
- Proceedings of International Conference on Big Data, Machine Learning and their Applications (ISBN: 9789811583766)
- Publication Year:
- 2020
- Publisher:
- Springer Singapore, 2020.
Abstract
- There has been rising interest in reduced-precision training of deep neural networks (DNNs), moving from single precision (FP32) to lower-precision formats (FP16, FP8, bfloat16), driven by the rapid growth in model sizes: parameters require less representational space when stored in lower precision. However, training a DNN in a reduced-precision format (FP16, FP8, or bfloat16) is challenging because the data format may be inadequate for representing the gradients during backpropagation. In this research paper, we compare several novel approaches to training a DNN with these reduced-precision formats and explore the challenges that arise during such training. We also examine the layers of the network where precision can be reduced most aggressively during backpropagation and observe where sufficient precision must still be retained.
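For context, the gradient-underflow problem described in the abstract is commonly mitigated with loss scaling: the loss is multiplied by a large factor before backpropagation so that small FP16 gradients remain representable, and the gradients are unscaled before the optimizer updates FP32 master weights. The sketch below is not one of the approaches compared in the paper; it is a minimal illustration of this general idea using PyTorch's torch.cuda.amp utilities, with a toy model and synthetic data assumed for self-containment.

```python
# Minimal sketch of FP16 mixed-precision training with dynamic loss scaling.
# Not the paper's method; only illustrates why scaling keeps gradients representable.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"  # autocast/GradScaler target CUDA in this sketch

# Toy model and optimizer; the optimizer holds FP32 master weights.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()

# GradScaler multiplies the loss before backward and unscales gradients
# before the optimizer step, so small FP16 gradients do not flush to zero.
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

for step in range(100):
    inputs = torch.randn(32, 128, device=device)           # synthetic batch
    targets = torch.randint(0, 10, (32,), device=device)   # synthetic labels

    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=use_amp):          # run eligible ops in FP16
        loss = criterion(model(inputs), targets)

    scaler.scale(loss).backward()   # backpropagate the scaled loss
    scaler.step(optimizer)          # unscale gradients; skip step on inf/NaN
    scaler.update()                 # adjust the scale factor dynamically
```

The scale factor is adjusted dynamically: it grows while gradients stay finite and is cut back when overflow is detected, which is one practical way to keep reduced-precision gradients within the representable range during training.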
Details
- Database:
- OpenAIRE
- Journal:
- Proceedings of International Conference on Big Data, Machine Learning and their Applications (ISBN: 9789811583766)
- Accession number:
- edsair.doi...........78472ae30ef9017d2e53f41df8082fe9
- Full Text:
- https://doi.org/10.1007/978-981-15-8377-3_11