
ML$^2$Tuner: Efficient Code Tuning via Multi-Level Machine Learning Models

Authors:
Cha, JooHyoung
Lee, Munyoung
Kwon, Jinse
Lee, Jubin
Lee, Jemin
Kwon, Yongin
Publication Year:
2024

Abstract

The increasing complexity of deep learning models necessitates specialized hardware and software optimizations, particularly for deep learning accelerators. Existing autotuning methods often suffer from prolonged tuning times because they profile invalid configurations, which can cause runtime errors. We introduce ML$^2$Tuner, a multi-level machine learning tuning technique that improves autotuning efficiency by incorporating a validity prediction model to filter out invalid configurations and an advanced performance prediction model that utilizes hidden features from the compilation process. Experimental results on an extended VTA accelerator demonstrate that ML$^2$Tuner achieves equivalent performance improvements using only 12.3% of the samples required by a comparable approach in TVM, and reduces invalid profiling attempts by an average of 60.8%, highlighting its potential to improve autotuning performance by filtering out invalid configurations.

Comment: Accepted in the NeurIPS 2024 Workshop on Machine Learning for Systems; 12 pages, 5 figures
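As a rough illustration of the two-stage idea described in the abstract (not the authors' implementation), the sketch below assumes scikit-learn models and two hypothetical callbacks, compile_config and profile_on_device, standing in for the compiler and the accelerator. A validity classifier screens candidate configurations before any profiling, and a performance model ranks the survivors using hidden features collected during compilation, so only the most promising valid candidates are measured on the device.

```python
# Hedged sketch of a multi-level tuning loop in the spirit of ML^2Tuner.
# Model choices and the callbacks compile_config / profile_on_device are
# hypothetical placeholders, not the paper's actual components.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingRegressor

def tune(candidates, compile_config, profile_on_device, n_rounds=10, top_k=8):
    """candidates: list of knob vectors (lists of numbers).
    compile_config(knobs) -> (ok: bool, hidden: list[float])   # hypothetical
    profile_on_device(knobs) -> (ok: bool, latency_ms: float)  # hypothetical
    """
    val_X, val_y = [], []     # knob vectors / 1 if profiling succeeded, else 0
    perf_X, perf_y = [], []   # hidden compile-time features / measured latency
    val_model = RandomForestClassifier(n_estimators=100)
    perf_model = GradientBoostingRegressor()
    best = (None, float("inf"))

    for _ in range(n_rounds):
        # Stage 1: validity filtering. Skip candidates predicted to fail at
        # runtime, avoiding wasted profiling runs on the accelerator.
        pool = candidates
        if len(set(val_y)) > 1:
            val_model.fit(np.array(val_X), np.array(val_y))
            keep = val_model.predict(np.array(pool)) == 1
            pool = [c for c, k in zip(pool, keep) if k] or pool

        # Compile survivors and collect hidden features for the performance model.
        compiled = []
        for knobs in pool:
            ok, hidden = compile_config(knobs)
            if ok:
                compiled.append((knobs, hidden))

        # Stage 2: rank by predicted latency; profile only the top-k candidates.
        if compiled and len(perf_y) > 1:
            perf_model.fit(np.array(perf_X), np.array(perf_y))
            preds = perf_model.predict(np.array([h for _, h in compiled]))
            order = np.argsort(preds)[:top_k]
        else:
            order = range(min(top_k, len(compiled)))

        # Profile the selected candidates and feed results back to both models.
        for i in order:
            knobs, hidden = compiled[i]
            ok, latency = profile_on_device(knobs)
            val_X.append(knobs)
            val_y.append(int(ok))
            if ok:
                perf_X.append(hidden)
                perf_y.append(latency)
                if latency < best[1]:
                    best = (knobs, latency)
    return best
```

The design choice the sketch tries to capture is that the two models learn from different signals: the validity classifier trains on cheap knob vectors and profiling outcomes, while the performance regressor trains on richer hidden features that only exist once a configuration has been compiled.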

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2411.10764
Document Type:
Working Paper