1. Efficient Training of Large Language Models on Distributed Infrastructures: A Survey
- Author
Jiangfei Duan, Shuo Zhang, Zerui Wang, Lijuan Jiang, Wenwen Qu, Qinghao Hu, Guoteng Wang, Qizhen Weng, Hang Yan, Xingcheng Zhang, Xipeng Qiu, Dahua Lin, Yonggang Wen, Xin Jin, Tianwei Zhang, and Peng Sun
- Subjects
Computer Science - Distributed, Parallel, and Cluster Computing
- Abstract
Large Language Models (LLMs) like GPT and LLaMA are revolutionizing the AI industry with their sophisticated capabilities. Training these models requires vast GPU clusters and significant computing time, posing major challenges in terms of scalability, efficiency, and reliability. This survey explores recent advancements in training systems for LLMs, including innovations in training infrastructure with AI accelerators, networking, storage, and scheduling. Additionally, the survey covers parallelism strategies, as well as optimizations for computation, communication, and memory in distributed LLM training. It also includes approaches for maintaining system reliability over extended training periods. By examining current innovations and future directions, this survey aims to provide valuable insights toward improving LLM training systems and tackling ongoing challenges. Furthermore, traditional digital circuit-based computing systems face significant constraints in meeting the computational demands of LLMs, highlighting the need for innovative solutions such as optical computing and optical networks.
- Published
2024
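
As a pointer to what the "parallelism strategies" named in the abstract look like in practice, below is a minimal sketch of the most basic one, data parallelism, using PyTorch's `DistributedDataParallel`. This is not taken from the survey; the toy linear model, hyperparameters, and launch command are illustrative assumptions standing in for a real LLM training setup.

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # One process per GPU; launch with e.g.
    #   torchrun --nproc_per_node=8 ddp_sketch.py
    # (script name is a placeholder). torchrun sets RANK/WORLD_SIZE.
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank % torch.cuda.device_count())

    # A toy model stands in for an LLM. DDP replicates the model on
    # every GPU and all-reduces gradients across replicas.
    model = torch.nn.Linear(1024, 1024).cuda()
    model = DDP(model)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        # Each rank draws its own shard of the batch (random here).
        x = torch.randn(32, 1024, device="cuda")
        loss = model(x).square().mean()
        opt.zero_grad()
        loss.backward()  # gradient all-reduce overlaps with backward
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Data parallelism alone replicates the full model on every GPU; the memory, communication, and pipeline/tensor-parallel techniques the survey covers exist largely to remove that replication cost at LLM scale.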