1. A Quantitative Evaluation of Approximate Softmax Functions for Deep Neural Networks
- Authors
Elizondo-Fernández, Fabricio, León-Vega, Luis G., Meinhardt, Cristina, and Castro-Godínez, Jorge
- Subjects
Computer Science - Hardware Architecture, Electrical Engineering and Systems Science - Signal Processing
- Abstract
The softmax function is used as an activation function in the output layer of a neural network. It extracts the probabilities of the output classes while introducing a non-linearity into the model. On low-end FPGAs, implementations of Deep Neural Networks (DNNs) require the exploration of optimisation techniques to improve computational efficiency and reduce hardware resource consumption. This work explores approximate computing techniques to implement the softmax function, using Taylor and Padé approximations, and interpolation methods with Look-Up Tables (LUTs). The approximations aim to reduce the required execution time at the cost of reduced precision in the results produced by the softmax function. Each implementation is evaluated for accuracy using the Root Mean Square Error (RMSE), and performance is assessed by measuring execution times. From our evaluation, quadratic interpolation with LUTs achieves the lowest error, but in terms of performance, Taylor and Padé approximations show better execution times, which highlights the existing design trade-off between numerical accuracy and power consumption.
- Published
- 2025
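
To make the general idea concrete, below is a minimal, illustrative C++ sketch of a softmax whose exact exponential is replaced by a truncated Taylor series. This is not the authors' FPGA implementation: the series order, the max-shifting step, and the use of plain `float` arithmetic are assumptions made purely for the example.

```cpp
// Illustrative only: softmax with a Taylor-approximated exponential.
// Not the authors' implementation; order and data types are assumptions.
#include <cstddef>
#include <cstdio>
#include <vector>

// Approximate exp(x) with a 4th-order Taylor series around 0:
// exp(x) ~= 1 + x + x^2/2! + x^3/3! + x^4/4!
// The truncated series is only accurate for small |x|.
static float taylor_exp(float x) {
    float term = 1.0f;  // current term x^k / k!
    float sum  = 1.0f;  // accumulated series value
    for (int k = 1; k <= 4; ++k) {
        term *= x / static_cast<float>(k);
        sum  += term;
    }
    return sum;
}

// Softmax built on the approximate exponential. Inputs are shifted by the
// maximum logit first, the usual trick that keeps the arguments small and
// non-positive, where the truncated series behaves best.
static std::vector<float> approx_softmax(const std::vector<float>& logits) {
    float max_val = logits[0];
    for (float v : logits) {
        if (v > max_val) max_val = v;
    }

    std::vector<float> out(logits.size());
    float denom = 0.0f;
    for (std::size_t i = 0; i < logits.size(); ++i) {
        out[i] = taylor_exp(logits[i] - max_val);
        denom += out[i];
    }
    for (float& v : out) v /= denom;
    return out;
}

int main() {
    const std::vector<float> logits = {1.0f, 2.0f, 0.5f};
    for (float p : approx_softmax(logits)) std::printf("%f\n", p);
    return 0;
}
```

A LUT-based variant in the spirit of the paper would replace `taylor_exp` with a table lookup plus (quadratic) interpolation between stored samples of the exponential, trading memory for the arithmetic of the series evaluation.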