1. MuxFlow: efficient GPU sharing in production-level clusters with more than 10000 GPUs.
- Author
- Liu, Xuanzhe; Zhao, Yihao; Liu, Shufan; Li, Xiang; Zhu, Yibo; Liu, Xin; Jin, Xin
- Abstract
- Large-scale GPU clusters are widely used to speed up both latency-critical (online) and best-effort (offline) deep learning (DL) workloads. However, following common practice, the DL clusters at ByteDance dedicate each GPU to a single workload or share workloads only in the time dimension, leading to very low GPU resource utilization. Existing techniques such as NVIDIA MPS provide an opportunity to share multiple workloads in space on widely deployed NVIDIA GPUs, but they cannot guarantee the performance of online workloads. We present MuxFlow, the first production system that scales to massive GPU fleets to support highly efficient space-sharing for DL workloads. MuxFlow introduces a two-level protection mechanism for both memory and computation to guarantee the performance of online workloads, and it leverages dynamic streaming multiprocessor (SM) allocation to improve the efficiency of offline workloads. Based on our practical error analysis, we design a mixed error-handling mechanism to improve system reliability. MuxFlow has been deployed at ByteDance on more than 18,000 GPUs. The deployment results indicate that MuxFlow substantially improves GPU utilization from 26% to 76%, SM activity from 16% to 33%, and GPU memory usage from 42% to 48%.
- Published
- 2024
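A note on the mechanism: the dynamic SM allocation mentioned in the abstract maps onto a real knob that Volta-class NVIDIA MPS exposes, the CUDA_MPS_ACTIVE_THREAD_PERCENTAGE environment variable, which caps the fraction of SM threads an MPS client may occupy. The sketch below is not MuxFlow's implementation, which the record does not describe in detail; it is a minimal illustration of the idea under stated assumptions. The linear quota policy, the 10% floor, and the `offline_train.py` command are all hypothetical, and since MPS reads the variable when a client starts, truly dynamic adjustment would require the MPS control daemon or client restarts rather than this launch-time setting.

```python
import os
import subprocess

def sm_quota_for_offline(online_sm_activity: float) -> int:
    """Map the online workload's measured SM activity (0.0-1.0) to an
    SM percentage granted to the offline workload. The linear policy
    and the 10% floor are illustrative, not MuxFlow's actual policy."""
    free = max(0.0, 1.0 - online_sm_activity)
    return max(10, int(free * 100))

def launch_offline(cmd: list[str], online_sm_activity: float) -> subprocess.Popen:
    """Start an offline job under MPS with a capped SM share.
    CUDA_MPS_ACTIVE_THREAD_PERCENTAGE is a real Volta MPS variable;
    the surrounding orchestration is a simplified sketch."""
    env = os.environ.copy()
    env["CUDA_MPS_ACTIVE_THREAD_PERCENTAGE"] = str(
        sm_quota_for_offline(online_sm_activity)
    )
    return subprocess.Popen(cmd, env=env)

if __name__ == "__main__":
    # Hypothetical reading: the online job occupies ~40% of the SMs,
    # so the offline job is capped at roughly the remaining share.
    proc = launch_offline(["python", "offline_train.py"], online_sm_activity=0.40)
    proc.wait()
```

The design point this illustrates is that the cap applies per MPS client, so an online workload can run uncapped while co-located offline workloads are throttled to the leftover SM capacity, which is the spirit of the space-sharing with performance protection that the abstract describes.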