
Modeling and Optimizing the Scaling Performance in Distributed Deep Learning Training

Authors :
Liu, Ting
Miao, Tianhao
Wu, Qinghua
Li, Zhenyi
He, Guangxin
Wu, Jiaoren
Zhang, Shengzhuo
Yang, Xingwu
Tyson, Gareth
Xie, Gaogang
Publication Year :
2022

Abstract

Distributed Deep Learning (DDL) is widely used to accelerate deep neural network training for various Web applications. In each iteration of DDL training, each worker synchronizes neural network gradients with other workers. This introduces communication overhead and degrades the scaling performance. In this paper, we propose a recursive model, OSF (Scaling Factor considering Overlap), for estimating the scaling performance of DDL training of neural network models, given the settings of the DDL system. OSF captures two main characteristics of DDL training: the overlap between computation and communication, and the tensor fusion for batching updates. Measurements on a real-world DDL system show that OSF obtains a low estimation error (ranging from 0.5% to 8.4% for different models). Using OSF, we identify the factors that degrade the scaling performance, and propose solutions to effectively mitigate their impacts. Specifically, the proposed adaptive tensor fusion improves the scaling performance by 32.2% to 150% compared to using a constant tensor fusion buffer size. © 2022 Owner/Author.
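
The abstract describes OSF as capturing two effects: the overlap of communication with computation, and tensor fusion that batches gradient updates into fewer messages. The paper's actual recursive OSF model is not reproduced in this record; the sketch below is only an illustrative simplification of how such a scaling-factor estimate might combine the two effects. The function name, parameters, default values, and formula are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only: a simplified per-iteration scaling-factor estimate
# that accounts for computation/communication overlap and tensor-fusion
# buffering. All names and numbers here are assumptions, not the paper's OSF.

def estimated_scaling_factor(t_compute_ms, grad_size_mb, bandwidth_mb_per_ms,
                             overlap_ratio=0.7, fusion_buffer_mb=64,
                             per_message_latency_ms=1.0):
    """Estimate the fraction of ideal speedup retained in one training iteration."""
    # Tensor fusion packs gradients into buffers, so fewer synchronization
    # messages are sent and per-message latency is amortized.
    num_messages = max(1, -(-int(grad_size_mb) // int(fusion_buffer_mb)))  # ceil division
    t_comm = grad_size_mb / bandwidth_mb_per_ms + num_messages * per_message_latency_ms

    # Part of the communication is hidden behind (overlapped with) computation;
    # only the exposed remainder lengthens the iteration.
    exposed_comm = max(0.0, t_comm - overlap_ratio * t_compute_ms)
    t_iter_distributed = t_compute_ms + exposed_comm

    # Scaling factor: compute-only iteration time over distributed iteration time.
    return t_compute_ms / t_iter_distributed


if __name__ == "__main__":
    # Larger fusion buffers reduce message count, which can raise the scaling factor.
    for buf in (4, 64, 256):
        sf = estimated_scaling_factor(t_compute_ms=120, grad_size_mb=400,
                                      bandwidth_mb_per_ms=10, fusion_buffer_mb=buf)
        print(f"fusion buffer {buf} MB -> estimated scaling factor {sf:.2f}")
```

In this toy setup the buffer size changes how much communication latency stays exposed, which mirrors the abstract's point that an adaptive (rather than constant) tensor fusion buffer size can improve scaling performance.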

Details

Database :
OAIster
Notes :
English
Publication Type :
Electronic Resource
Accession number :
edsoai.on1331262814
Document Type :
Electronic Resource