
Optimising Resource Management for Embedded Machine Learning

Authors :
Xun, Lei
Tran-Thanh, Long
Al-Hashimi, Bashir M.
Merrett, Geoff V.
Publication Year :
2021

Abstract

Machine learning inference is increasingly being executed locally on mobile and embedded platforms, due to the clear advantages in latency, privacy and connectivity. In this paper, we present approaches for online resource management in heterogeneous multi-core systems and show how they can be applied to optimise the performance of machine learning workloads. Performance can be defined using platform-dependent (e.g. speed, energy) and platform-independent (accuracy, confidence) metrics. In particular, we show how a Deep Neural Network (DNN) can be dynamically scaled to trade off these various performance metrics. Achieving consistent performance when executing on different platforms is necessary yet challenging, due to the different resources provided, their capabilities, and their time-varying availability when executing alongside other workloads. Managing the interface between available hardware resources (often numerous and heterogeneous in nature), software requirements, and user experience is increasingly complex.

Comment: Accepted at DATE 2020
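
The abstract refers to a DNN that can be dynamically scaled at runtime to trade speed and energy against accuracy and confidence, but this record does not spell out the scaling mechanism. The sketch below assumes a simple width-scaling scheme (in the spirit of slimmable networks) in PyTorch; the ScalableMLP class, its layer sizes, and the width_mult parameter are hypothetical illustrations, not the authors' implementation.

    # Minimal sketch: a two-layer MLP whose hidden width can be reduced at
    # inference time. Fewer active units typically means lower latency and
    # energy at some cost in accuracy (the trade-off the abstract describes).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ScalableMLP(nn.Module):
        def __init__(self, in_dim=784, hidden=512, out_dim=10):
            super().__init__()
            self.fc1 = nn.Linear(in_dim, hidden)
            self.fc2 = nn.Linear(hidden, out_dim)
            self.hidden = hidden

        def forward(self, x, width_mult=1.0):
            # Use only the first k hidden units, selected at run time by the
            # resource manager according to current platform availability.
            k = max(1, int(self.hidden * width_mult))
            h = F.relu(F.linear(x, self.fc1.weight[:k], self.fc1.bias[:k]))
            return F.linear(h, self.fc2.weight[:, :k], self.fc2.bias)

    model = ScalableMLP().eval()
    x = torch.randn(1, 784)
    with torch.no_grad():
        for width in (1.0, 0.5, 0.25):  # example runtime operating points
            logits = model(x, width_mult=width)
            print(f"width={width:.2f} -> logits shape {tuple(logits.shape)}")

In such a scheme, a runtime manager could pick width_mult per inference based on contention from co-running workloads, which is one way the platform-dependent and platform-independent metrics mentioned above could be traded off online.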

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2105.03608
Document Type :
Working Paper