Optimizing DNNs With Partially Equivalent Transformations and Automated Corrections
- Source :
- IEEE Transactions on Computers; December 2023, Vol. 72, Issue 12, pp. 3546-3560 (15 pages)
- Publication Year :
- 2023
Abstract
- Deep neural network (DNN) applications are typically represented as tensor programs. To boost the performance of DNN computations, existing works adopt fully equivalent transformations for tensor program optimization, guaranteeing equivalence on every element of the tensors involved. However, as a tensor contains thousands of elements, such optimization misses opportunities that tolerate in-equivalence on a small minority of elements. In this work, we propose PET, the first work that introduces partially equivalent transformations to optimize tensor programs. To maintain the functional equivalence of tensor programs, PET automatically finds and corrects the in-equivalent positions by leveraging the multi-linearity of DNN computations. PET further uses a mutation manager to improve search efficiency. Evaluation results show that, by introducing the new optimization opportunities of partially equivalent transformations, PET achieves up to 1.98× and 2.20× speedups on NVIDIA Tesla A100 and V100, respectively, compared with existing DNN frameworks.
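- To illustrate the core idea, the following is a minimal NumPy sketch, not the authors' implementation: the convolution helper, the tensor shapes, and the seam-correction step are illustrative assumptions. A batch of two images is folded into one tall image so that a single "same"-padded 3×3 convolution replaces two; the transformed program agrees with the original on all output elements except the few rows adjacent to the seam, which are then corrected.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive single-channel 'same' convolution with zero padding (illustrative helper)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
H, W = 8, 8
batch = rng.standard_normal((2, H, W))
kernel = rng.standard_normal((3, 3))

# Original program: convolve each image in the batch independently.
ref = np.stack([conv2d_same(img, kernel) for img in batch])

# Partially equivalent transformation: stack the batch along the height
# dimension and run one convolution over the tall image.
tall = np.concatenate([batch[0], batch[1]], axis=0)   # shape (2H, W)
tall_out = conv2d_same(tall, kernel)
cand = np.stack([tall_out[:H], tall_out[H:]])         # fold back into a batch

# Only the rows touching the seam can differ, because the 3x3 window there
# reads across the image boundary instead of the zero padding.
diff = np.abs(cand - ref).max(axis=2)
print("rows with mismatch:", np.argwhere(diff > 1e-12))

# Automated correction: recompute only the in-equivalent rows under the
# original per-image semantics (a real system would emit a small correction
# kernel for exactly these positions), leaving the majority of elements untouched.
cand[0, H - 1, :] = conv2d_same(batch[0], kernel)[H - 1, :]
cand[1, 0, :] = conv2d_same(batch[1], kernel)[0, :]
assert np.allclose(cand, ref)
```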
Details
- Language :
- English
- ISSN :
- 0018-9340 and 1557-9956
- Volume :
- 72
- Issue :
- 12
- Database :
- Supplemental Index
- Journal :
- IEEE Transactions on Computers
- Publication Type :
- Periodical
- Accession number :
- ejs64455975
- Full Text :
- https://doi.org/10.1109/TC.2023.3307795