Early Prediction of DNN Activation Using Hierarchical Computations
- Authors
Bharathwaj Suresh, Kamlesh Pillai, Gurpreet Singh Kalsi, Avishaii Abuhatzera, and Sreenivas Subramoney
- Subjects
DNN, ReLU, floating-point numbers, hardware acceleration, Mathematics (QA1-939)
- Abstract
Deep Neural Networks (DNNs) have set state-of-the-art performance numbers in diverse fields such as computer vision, voice recognition, biology, and bioinformatics. However, both learning from data (training) and applying the learnt information (inference) require huge computational resources. Approximate computing is a common method to reduce computation cost, but it introduces a loss in task accuracy, which limits its applicability. Using an inherent property of the Rectified Linear Unit (ReLU), a popular activation function, we propose a mathematical model that performs the multiply-accumulate (MAC) operation at reduced precision to predict negative values early. We also propose a hierarchical computation method that achieves the same results as full-precision IEEE 754 computation. Applying this method to ResNet50 and VGG16 shows that up to 80% of ReLU zeros (which is 50% of all ReLU outputs) can be predicted and detected early using just 3 of the 23 mantissa bits. The method is equally applicable to other floating-point representations.
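To make the idea concrete, the sketch below illustrates one way the described scheme could look in software: float32 mantissas are truncated to a few bits, a low-precision dot product is computed first, and the ReLU output is predicted to be zero when that approximation is clearly negative, with a fall-back to full precision otherwise. This is a minimal sketch, not the authors' exact model; the function names, the `keep_bits` parameter, and the `margin` guard are assumptions made here for illustration.

```python
import numpy as np

def truncate_mantissa(x, keep_bits=3):
    """Keep only the top `keep_bits` of the 23-bit float32 mantissa
    (sign and exponent untouched) as a stand-in for reduced-precision operands."""
    x = np.asarray(x, dtype=np.float32)
    drop = 23 - keep_bits
    mask = np.uint32(0xFFFFFFFF ^ ((1 << drop) - 1))
    return (x.view(np.uint32) & mask).view(np.float32)

def relu_mac_with_early_prediction(weights, activations, keep_bits=3, margin=0.0):
    """Hierarchical MAC sketch: run a low-precision pass first; if the result is
    confidently negative, predict the ReLU output as zero and skip the full
    IEEE 754 single-precision computation."""
    approx = np.dot(truncate_mantissa(weights, keep_bits),
                    truncate_mantissa(activations, keep_bits))
    if approx < -margin:                 # confidently negative -> ReLU output is 0
        return 0.0, True                 # (output, predicted_early)
    full = np.dot(np.asarray(weights, np.float32),
                  np.asarray(activations, np.float32))
    return float(max(full, 0.0)), False  # fall back to full precision
```

The abstract states that the hierarchical scheme matches full-precision results; in this sketch that property depends on choosing `margin` to cover the worst-case truncation error, which is left as a placeholder rather than reproducing the paper's analysis.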
- Published
2021