
Analysis of Distributed Deep Learning in the Cloud

Authors :
Sharma, Aakash
Bhasi, Vivek M.
Singh, Sonali
Jain, Rishabh
Gunasekaran, Jashwant Raj
Mitra, Subrata
Kandemir, Mahmut Taylan
Kesidis, George
Das, Chita R.
Publication Year :
2022

Abstract

We aim to resolve the problem of costly and inefficient distributed deep learning (DDL) training on public clouds by introducing a comprehensive DDL profiler, which can determine the various execution "stalls" that DDL suffers from while running on a public cloud. We implement the profiler by extending prior work to additionally estimate two types of communication stalls: interconnect stalls and network stalls. We train popular DNN models using the profiler to characterize various AWS GPU instances and list their advantages and shortcomings so that users can make an informed decision. We observe that the more expensive GPU instances may not be the most performant for all DNN models, and that AWS may sub-optimally allocate hardware interconnect resources. Specifically, the intra-machine interconnect can introduce communication overheads of up to 90% of DNN training time, and network-connected instances can suffer a slowdown of up to 5x compared to training on a single instance. Further, we model the impact of macroscopic DNN features, such as the number of layers and the number of gradients, on communication stalls. Finally, we propose a measurement-based recommendation model that helps users lower their public cloud monetary costs for DDL, given a time budget.
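
To make the two central ideas of the abstract concrete, the sketch below illustrates (in Python) how measured per-iteration time might be decomposed into compute time plus interconnect and network stalls, and how a cost-under-time-budget recommendation could be phrased as picking the cheapest profiled configuration that still fits the budget. This is only an illustrative sketch, not the authors' profiler or recommendation model; all instance names, prices, and timing numbers are made-up placeholders.

# Minimal illustrative sketch (not the paper's implementation).
# Idea 1: iteration time = compute time + interconnect stall + network stall.
# Idea 2: recommend the cheapest profiled configuration meeting a time budget.
from dataclasses import dataclass

@dataclass
class ProfiledConfig:
    name: str                  # instance/cluster label (hypothetical)
    price_per_hour: float      # USD per hour (hypothetical)
    compute_time: float        # seconds of pure GPU compute per iteration
    interconnect_stall: float  # seconds stalled on intra-machine interconnect
    network_stall: float       # seconds stalled on inter-instance network

    @property
    def iteration_time(self) -> float:
        # Total per-iteration time is compute plus both communication stalls.
        return self.compute_time + self.interconnect_stall + self.network_stall

def estimated_hours_and_cost(cfg: ProfiledConfig, total_iterations: int):
    """Project total training hours and dollar cost from profiled iteration time."""
    hours = cfg.iteration_time * total_iterations / 3600.0
    return hours, hours * cfg.price_per_hour

def recommend(configs, total_iterations, time_budget_hours):
    """Cheapest configuration whose projected training time fits the budget."""
    feasible = []
    for cfg in configs:
        hours, cost = estimated_hours_and_cost(cfg, total_iterations)
        if hours <= time_budget_hours:
            feasible.append((cost, hours, cfg))
    return min(feasible, key=lambda t: t[0], default=None)

# Made-up profiles: a single-GPU instance vs. a network-connected cluster.
profiles = [
    ProfiledConfig("single-gpu-instance", 3.06, 0.50, 0.05, 0.00),
    ProfiledConfig("multi-instance-cluster", 12.24, 0.15, 0.04, 0.20),
]

best = recommend(profiles, total_iterations=100_000, time_budget_hours=12)
if best is not None:
    cost, hours, cfg = best
    print(f"Recommended: {cfg.name} (~{hours:.1f} h, ~${cost:.2f})")
else:
    print("No profiled configuration meets the time budget.")

In this toy setup the single-GPU instance wins whenever it fits the budget, mirroring the abstract's observation that network-connected instances can be slower (and thus costlier) than a single instance despite more aggregate hardware.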

Details

Database :
arXiv
Publication Type :
Report
Accession Number :
edsarx.2208.14344
Document Type :
Working Paper