
TBA: Faster Large Language Model Training Using SSD-Based Activation Offloading

Authors:
Wu, Kun
Park, Jeongmin Brian
Zhang, Xiaofan
Hidayetoğlu, Mert
Mailthody, Vikram Sharma
Huang, Sitao
Lumetta, Steven Sam
Hwu, Wen-mei
Publication Year:
2024

Abstract

GPU memory capacity has not kept pace with the growth of large language models (LLMs), hindering model training. In particular, activations -- the intermediate tensors produced during forward propagation and reused in backward propagation -- dominate GPU memory use. To address this challenge, we propose TBA, which efficiently offloads activations to high-capacity NVMe SSDs. This approach reduces GPU memory usage without impacting performance by adaptively overlapping data transfers with computation. TBA is compatible with popular deep learning frameworks such as PyTorch, Megatron, and DeepSpeed, and it employs techniques including tensor deduplication, forwarding, and adaptive offloading to further enhance efficiency. We conduct extensive experiments on GPT, BERT, and T5. Results demonstrate that TBA reduces peak activation memory usage by 47%. At the same time, TBA perfectly overlaps I/O with computation and incurs negligible performance overhead. We introduce the recompute-offload-keep (ROK) curve to compare TBA offloading with two other tensor placement strategies: keeping activations in memory and layerwise full recomputation. We find that TBA achieves better memory savings than layerwise full recomputation while retaining the performance of keeping the activations in memory.
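
To make the offloading idea concrete, below is a minimal sketch of activation offloading built on PyTorch's torch.autograd.graph.saved_tensors_hooks API. It is not the authors' TBA implementation: the OffloadToDisk class, the file layout, and the synchronous torch.save/torch.load calls are illustrative assumptions, whereas TBA overlaps NVMe transfers with computation and adds tensor deduplication, forwarding, and adaptive offloading on top.

# A minimal sketch of SSD-based activation offloading using PyTorch's
# saved-tensors hooks. Illustrative only; not the TBA implementation.
import os
import tempfile
import uuid

import torch
import torch.nn as nn


class OffloadToDisk(torch.autograd.graph.saved_tensors_hooks):
    """Spill activations saved for backward to files in an (SSD-backed) directory."""

    def __init__(self, directory=None):
        self.directory = directory or tempfile.mkdtemp()

        def pack(tensor):
            # Called during forward: persist the activation to disk and keep
            # only its path plus the metadata needed to restore it later.
            path = os.path.join(self.directory, f"{uuid.uuid4().hex}.pt")
            torch.save(tensor.detach().cpu(), path)
            return (path, tensor.device, tensor.dtype)

        def unpack(packed):
            # Called during backward: reload the activation from disk.
            path, device, dtype = packed
            tensor = torch.load(path, map_location=device).to(dtype)
            os.remove(path)
            return tensor

        super().__init__(pack, unpack)


if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))
    x = torch.randn(8, 1024, requires_grad=True)

    # Activations produced inside this context are written to disk instead of
    # being held in memory until backward propagation needs them.
    with OffloadToDisk():
        loss = model(x).sum()
    loss.backward()

Pointing the directory at an NVMe-backed filesystem moves saved activations off the accelerator between the forward and backward passes, which is the memory-saving effect the abstract quantifies; hiding the resulting I/O behind computation is the part TBA addresses with asynchronous, adaptive transfers.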

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2408.10013
Document Type:
Working Paper