1. FaaSTube: Optimizing GPU-oriented Data Transfer for Serverless Computing
- Authors
Hao Wu, Junxiao Deng, Minchen Yu, Yue Yu, Yaochen Liu, Hao Fan, Song Wu, and Wei Wang
- Subjects
Computer Science - Distributed, Parallel, and Cluster Computing
- Abstract
Serverless computing has gained significant traction for machine learning inference applications, which are often deployed as serverless workflows consisting of multiple CPU and GPU functions with data dependencies. However, existing data-passing solutions for serverless computing primarily rely on host memory for fast data transfer, mandating substantial data movement and incurring significant I/O overhead. In this paper, we present FaaSTube, a GPU-efficient data-passing system for serverless inference. FaaSTube manages intermediate data within a GPU memory pool to facilitate direct data exchange between GPU functions. It enables fine-grained bandwidth sharing over PCIe and NVLink, minimizing data-passing latency for both host-to-GPU and GPU-to-GPU transfers while providing performance isolation between functions. Additionally, FaaSTube implements an elastic GPU memory pool that dynamically scales to accommodate varying data-passing demands. Evaluations on real-world applications show that FaaSTube reduces end-to-end latency by up to 90% and achieves up to 12x higher throughput compared to the state-of-the-art.
- Published
2024