
Characterizing and Efficiently Accelerating Multimodal Generation Model Inference

Authors:
Lee, Yejin
Sun, Anna
Hosmer, Basil
Acun, Bilge
Balioglu, Can
Wang, Changhan
Hernandez, Charles David
Puhrsch, Christian
Haziza, Daniel
Guessous, Driss
Massa, Francisco
Kahn, Jacob
Wan, Jeffrey
Reizenstein, Jeremy
Zhai, Jiaqi
Isaacson, Joe
Schlosser, Joel
Pino, Juan
Sadagopan, Kaushik Ram
Shamis, Leonid
Ma, Linjian
Hwang, Min-Jae
Chen, Mingda
Elhoushi, Mostafa
Rodriguez, Pedro
Pasunuru, Ram
Yih, Scott
Popuri, Sravya
Liu, Xing
Wu, Carole-Jean
Publication Year:
2024

Abstract

Generative artificial intelligence (AI) technology is revolutionizing the computing industry. Not only have its applications broadened to various sectors, but the technology also poses new system design and optimization opportunities. It is capable of understanding and responding in multiple modalities. However, this advanced capability currently comes with significant system resource demands. To sustainably scale generative AI capabilities to billions of users worldwide, inference must be fast and efficient. This paper pinpoints key system design and optimization opportunities by characterizing a family of emerging multi-modal generation models on real systems. Auto-regressive token generation is a critical latency bottleneck, typically dominated by GPU idle time. In addition to the memory-intensive attention common across these generative AI models, linear operations constitute a significant share of inference latency due to the feed-forward networks in Transformer-based models. We demonstrate that state-of-the-art optimization levers, spanning from applications to system software and hardware, set a 3.88x better baseline.

Comment: 13 pages including references. 8 figures. Under review for the HPCA 2025 Industry Track.
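As a rough illustration of why auto-regressive token generation dominates latency, the minimal PyTorch sketch below decodes one token at a time: each step depends on the previously generated token, so steps cannot overlap, and at small batch sizes each step is dominated by memory-bound matrix work in the attention and feed-forward layers. This is a hypothetical sketch, not the paper's implementation; ToyDecoder, generate, and all model dimensions are illustrative assumptions.

```python
# Minimal sketch of auto-regressive decoding (illustrative only, not the
# paper's system). It shows the strictly sequential token loop that makes
# decode latency-bound: step i+1 cannot start until step i finishes, and
# each step launches many small GPU kernels, leaving the GPU idle between
# them. All sizes below are arbitrary toy values.
import torch
import torch.nn as nn

class ToyDecoder(nn.Module):
    def __init__(self, vocab_size=32000, d_model=256, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        # Large linear projection; together with the feed-forward networks
        # inside each block, these linear ops are the kind the paper finds
        # contributing significantly to inference latency.
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        x = self.blocks(x, mask=mask)
        return self.lm_head(x[:, -1])  # logits for the next token only

@torch.no_grad()
def generate(model, prompt, steps=16):
    tokens = prompt
    for _ in range(steps):  # strictly sequential: no parallelism across steps
        logits = model(tokens)
        next_tok = logits.argmax(dim=-1, keepdim=True)  # greedy decoding
        tokens = torch.cat([tokens, next_tok], dim=1)
    return tokens

model = ToyDecoder().eval()
out = generate(model, torch.tensor([[1, 2, 3]]))
print(out.shape)  # (1, 3 + 16)
```

Note that this naive loop recomputes attention over the full prefix at every step; production serving stacks avoid that with a key-value cache and close the remaining GPU idle gaps with application-, system-software-, and hardware-level levers of the kind the paper characterizes.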

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2410.00215
Document Type:
Working Paper