SwiftDiffusion: Efficient Diffusion Model Serving with Add-on Modules

Authors:
Li, Suyi
Yang, Lingyun
Jiang, Xiaoxiao
Lu, Hanfeng
Di, Zhipeng
Lu, Weiyi
Chen, Jiawei
Liu, Kan
Yu, Yinghao
Lan, Tao
Yang, Guodong
Qu, Lin
Zhang, Liping
Wang, Wei
Publication Year:
2024

Abstract

This paper documents our characterization study and practices for serving text-to-image requests with stable diffusion models in production. We first present a comprehensive analysis of inference request traces from commercial text-to-image applications. The study begins with our observation that add-on modules, i.e., ControlNets and LoRAs, which augment the base stable diffusion models, are ubiquitous in commercial image generation. Despite their efficacy, these add-on modules incur high loading overhead, prolong serving latency, and consume expensive GPU resources. Driven by our characterization study, we present SwiftDiffusion, a system that efficiently generates high-quality images with stable diffusion models and add-on modules. To achieve this, SwiftDiffusion restructures the existing text-to-image serving workflow by identifying opportunities for parallel computation and distributing ControlNet computations across multiple GPUs. Further, SwiftDiffusion analyzes the dynamics of image generation and develops techniques that eliminate the overhead of LoRA loading and patching while preserving image quality. Finally, SwiftDiffusion introduces specialized optimizations in the backbone architecture of the stable diffusion models that remain compatible with the efficient serving of add-on modules. Compared to state-of-the-art text-to-image serving systems, SwiftDiffusion reduces serving latency by up to 5x and improves serving throughput by up to 2x without compromising image quality.
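The abstract stops at the level of system design, but two of the underlying ideas can be sketched. The first concerns LoRA patching overhead: merging low-rank deltas into the base weights requires loading and rewriting weights on every adapter switch, whereas applying the factors on the fly leaves the base weights untouched. The sketch below is illustrative PyTorch, not SwiftDiffusion's code, and the abstract does not confirm that this is the exact mechanism the paper uses:

```python
import torch
import torch.nn as nn

class UnmergedLoRALinear(nn.Module):
    """Apply a LoRA delta on the fly instead of merging it into the
    base weight, so swapping adapters requires no weight patching.
    Illustrative sketch only; not SwiftDiffusion's implementation."""

    def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 16.0):
        super().__init__()
        self.base = base  # frozen base projection, shared across requests
        self.scale = alpha / rank
        # Per-request adapter factors (hypothetical initialization).
        self.lora_a = nn.Parameter(torch.zeros(rank, base.in_features))
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W^T + (alpha / r) * (x A^T) B^T
        return self.base(x) + self.scale * (x @ self.lora_a.T) @ self.lora_b.T
```

The second is distributing ControlNet computation across GPUs. In a standard pipeline the ControlNet's residuals are consumed by the UNet's decoder, so the ControlNet and the UNet encoder have no data dependency between them and can in principle run concurrently on separate devices. The following sketch assumes two CUDA devices and uses trivial stand-in modules (`unet_encoder`, `unet_decoder`, and `controlnet` are placeholders, not the real networks); the paper's actual partitioning scheme may differ:

```python
import torch
import torch.nn as nn
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for the real networks (illustrative only).
unet_encoder = nn.Linear(64, 64).to("cuda:0")
unet_decoder = nn.Linear(64, 64).to("cuda:0")
controlnet = nn.Linear(64, 64).to("cuda:1")
pool = ThreadPoolExecutor(max_workers=1)

def denoise_step(latent: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
    # Launch the ControlNet on GPU 1 while the UNet encoder runs on GPU 0;
    # the decoder then consumes both results.
    ctrl_future = pool.submit(controlnet, cond.to("cuda:1"))
    enc = unet_encoder(latent)
    residual = ctrl_future.result().to("cuda:0")
    return unet_decoder(enc + residual)

latent = torch.randn(1, 64, device="cuda:0")
cond = torch.randn(1, 64, device="cuda:0")
out = denoise_step(latent, cond)
```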

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2407.02031
Document Type:
Working Paper