
Adaptive Distributed Stochastic Gradient Descent for Minimizing Delay in the Presence of Stragglers

Authors :
Kas Hanna, Serge
Bitar, Rawad
Parag, Parimal
Dasari, Venkat
El Rouayheb, Salim
Source :
International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 4262--4266, May 2020
Publication Year :
2020

Abstract

We consider the setting where a master wants to run a distributed stochastic gradient descent (SGD) algorithm on $n$ workers, each having a subset of the data. Distributed SGD may suffer from the effect of stragglers, i.e., slow or unresponsive workers who cause delays. One solution studied in the literature is to wait at each iteration for the responses of the fastest $k<n$ workers before updating the model, where $k$ is a fixed parameter. The choice of the value of $k$ presents a trade-off between the runtime (i.e., convergence rate) of SGD and the error of the model. Towards optimizing the error-runtime trade-off, we investigate distributed SGD with adaptive $k$. We first design an adaptive policy for varying $k$ that optimizes this trade-off based on an upper bound that we derive on the error as a function of the wall-clock time. Then, we propose an algorithm for adaptive distributed SGD that is based on a statistical heuristic. We implement our algorithm and provide numerical simulations which confirm our intuition and theoretical analysis.

Comment: Accepted to IEEE ICASSP 2020
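To make the fastest-$k$ scheme described in the abstract concrete, the following is a minimal, self-contained Python sketch of a simulated master-worker loop: each iteration the master waits only for the $k$ fastest of $n$ workers and averages their mini-batch gradients. The synthetic linear-regression data, the exponential response-time model, and the simple "increase $k$ over time" schedule are illustrative assumptions for this sketch, not the adaptive policy derived in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data split across n workers (assumption for illustration).
n_workers, d, samples_per_worker = 10, 5, 100
w_true = rng.normal(size=d)
X = [rng.normal(size=(samples_per_worker, d)) for _ in range(n_workers)]
y = [Xi @ w_true + 0.1 * rng.normal(size=samples_per_worker) for Xi in X]

def worker_gradient(i, w, batch=10):
    """Mini-batch gradient of the squared loss at worker i."""
    idx = rng.choice(samples_per_worker, size=batch, replace=False)
    Xi, yi = X[i][idx], y[i][idx]
    return Xi.T @ (Xi @ w - yi) / batch

def adaptive_k(t, T):
    """Illustrative schedule (not the paper's policy): start with few workers
    (fast, noisy updates) and end waiting for nearly all of them."""
    return min(n_workers, 1 + int((n_workers - 1) * t / T))

w = np.zeros(d)
lr, T = 0.05, 200
wall_clock = 0.0
for t in range(T):
    k = adaptive_k(t, T)
    # Random response times model stragglers; the master waits only for the k fastest.
    delays = rng.exponential(scale=1.0, size=n_workers)
    fastest = np.argsort(delays)[:k]
    wall_clock += np.sort(delays)[k - 1]  # time until the k-th worker responds
    grad = np.mean([worker_gradient(i, w) for i in fastest], axis=0)
    w -= lr * grad

print(f"wall-clock time: {wall_clock:.1f}, error: {np.linalg.norm(w - w_true):.4f}")

Raising $k$ toward $n$ later in training mirrors the intuition in the abstract: small $k$ gives cheap but noisy updates early on, while large $k$ reduces gradient noise (and hence error) at the cost of waiting for slower workers.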

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2002.11005
Document Type :
Working Paper
Full Text :
https://doi.org/10.1109/ICASSP40776.2020.9053961