
Inference Optimization of Foundation Models on AI Accelerators

Authors:
Park, Youngsuk
Budhathoki, Kailash
Chen, Liangfu
Kübler, Jonas
Huang, Jiaji
Kleindessner, Matthäus
Huan, Jun
Cevher, Volkan
Wang, Yida
Karypis, George
Publication Year:
2024

Abstract

Powerful foundation models, including large language models (LLMs) with Transformer architectures, have ushered in a new era of Generative AI across various industries. Industry and the research community have witnessed a large number of new applications based on these foundation models, including question answering, customer service, image and video generation, and code completion, among others. However, as the number of model parameters reaches hundreds of billions, their deployment incurs prohibitive inference costs and high latency in real-world scenarios. As a result, the demand for cost-effective and fast inference using AI accelerators is ever higher. To this end, our tutorial offers a comprehensive discussion of complementary inference optimization techniques using AI accelerators. Beginning with an overview of basic Transformer architectures and deep learning system frameworks, we deep dive into system optimization techniques for fast and memory-efficient attention computation and discuss how they can be implemented efficiently on AI accelerators. Next, we describe architectural elements that are key for fast Transformer inference. Finally, we examine various model compression and fast decoding strategies in the same context.

Comment: Tutorial published at KDD 2024. Camera-ready version.
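
As a concrete reference point for the attention computation mentioned above, the following is a minimal NumPy sketch of standard scaled dot-product attention (illustrative only; the function name, shapes, and random inputs are assumptions for this sketch and do not come from the tutorial). The full n-by-n score matrix it materializes is the memory cost that fast, memory-efficient attention implementations on AI accelerators aim to avoid.

import numpy as np

def scaled_dot_product_attention(q, k, v):
    # Naive attention: materializes the full (n, n) score matrix,
    # whose O(n^2) memory footprint motivates memory-efficient kernels.
    d = q.shape[-1]
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d)      # (n, n) attention scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ v                                # (n, d) output

# Illustrative usage: sequence length 8, head dimension 4
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 4)) for _ in range(3))
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (8, 4)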

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2407.09111
Document Type:
Working Paper