SLAQ
- Source :
- SoCC
- Publication Year :
- 2017
- Publisher :
- ACM, 2017.
Abstract
- Training machine learning (ML) models with large datasets can incur significant resource contention on shared clusters. This training typically involves many iterations that continually improve the quality of the model. Yet in exploratory settings, better models can be obtained faster by directing resources to jobs with the most potential for improvement. We describe SLAQ, a cluster scheduling system for approximate ML training jobs that aims to maximize the overall job quality. When allocating cluster resources, SLAQ explores the quality-runtime trade-offs across multiple jobs to maximize system-wide quality improvement. To do so, SLAQ leverages the iterative nature of ML training algorithms, by collecting quality and resource usage information from concurrent jobs, and then generating highly-tailored quality-improvement predictions for future iterations. Experiments show that SLAQ achieves an average quality improvement of up to 73% and an average delay reduction of up to 44% on a large set of ML training jobs, compared to resource fairness schedulers. Appeared in the 1st SysML Conference. Full paper published in ACM SoCC 2017.
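The scheduling idea the abstract describes — allocating resources to the jobs with the highest predicted quality improvement — can be sketched as a simple greedy loop. This is a hypothetical illustration under assumed inputs (job names, per-core gain estimates, and a diminishing-returns model are all made up), not SLAQ's actual implementation:

```python
def allocate(jobs, total_cores):
    """Greedily assign cores to the jobs with the largest predicted
    marginal quality improvement.

    jobs: dict mapping job name -> estimated quality gain for its first
          core in the next iteration (a stand-in for SLAQ's tailored
          per-job predictions).
    """
    alloc = {name: 0 for name in jobs}
    for _ in range(total_cores):
        # Assume diminishing returns: the marginal gain of the next core
        # shrinks as a job accumulates cores.
        best = max(jobs, key=lambda n: jobs[n] / (alloc[n] + 1))
        alloc[best] += 1
    return alloc

# A job predicted to improve quality faster receives more of the cluster.
shares = allocate({"regression_job": 3.0, "kmeans_job": 1.0}, total_cores=4)
```

A fairness scheduler would split the four cores evenly; the quality-driven loop instead skews the allocation toward the job whose model is still improving quickly, which is the trade-off the paper's evaluation quantifies.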
- Subjects :
- FOS: Computer and information sciences
Distributed computing
Approximate computing
Quality management
Computer science
Resource contention
Machine learning
Scheduling system
Scheduling (computing)
Job quality
Computer Science - Distributed, Parallel, and Cluster Computing
Information systems
Distributed, Parallel, and Cluster Computing (cs.DC)
Artificial intelligence
Details
- Database :
- OpenAIRE
- Journal :
- Proceedings of the 2017 Symposium on Cloud Computing
- Accession number :
- edsair.doi.dedup.....fb824d6119b06e05a0144a9280e94533