
Fast and Straggler-Tolerant Distributed SGD with Reduced Computation Load

Authors:
Egger, Maximilian
Hanna, Serge Kas
Bitar, Rawad
Publication Year: 2023

Abstract

In distributed machine learning, a central node outsources computationally expensive calculations to external worker nodes. The properties of optimization procedures such as stochastic gradient descent (SGD) can be leveraged to mitigate the effect of unresponsive or slow workers, called stragglers, which would otherwise degrade the benefit of outsourcing the computation. This can be done by waiting for only a subset of the workers to finish their computation at each iteration of the algorithm. Previous works proposed adapting the number of workers to wait for as the algorithm evolves, in order to optimize the speed of convergence. In contrast, we model the communication and computation times using independent random variables. Based on this model, we construct a novel scheme that adapts both the number of workers and the computation load throughout the run-time of the algorithm. Consequently, we improve the convergence speed of distributed SGD while significantly reducing the computation load, at the expense of a slight increase in communication load.
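To make the wait-for-the-fastest-subset idea concrete, below is a minimal simulation sketch in Python/NumPy. It is not the authors' scheme: the synthetic linear-regression task, the exponential computation and communication times, the schedule for increasing the number of workers waited for, and the fixed per-worker batch size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression task (illustrative assumption, not from the paper).
d, n_samples = 20, 10_000
w_true = rng.normal(size=d)
X = rng.normal(size=(n_samples, d))
y = X @ w_true + 0.1 * rng.normal(size=n_samples)

n_workers = 10          # total number of workers
k = 2                   # initially wait only for the fastest k workers
batch_per_worker = 32   # computation load per worker (assumed fixed here)
lr = 0.05
w = np.zeros(d)

for it in range(200):
    # Each worker computes a mini-batch gradient; its response time is the sum of a
    # random computation time (scaling with batch size) and a random communication time.
    grads, times = [], []
    for _ in range(n_workers):
        idx = rng.choice(n_samples, size=batch_per_worker, replace=False)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch_per_worker
        comp_time = rng.exponential(scale=batch_per_worker * 1e-3)
        comm_time = rng.exponential(scale=5e-3)
        grads.append(grad)
        times.append(comp_time + comm_time)

    # Straggler mitigation: aggregate only the k earliest responses.
    fastest = np.argsort(times)[:k]
    w -= lr * np.mean([grads[i] for i in fastest], axis=0)

    # Illustrative adaptation rule (hypothetical schedule): wait for more workers, i.e.
    # lower gradient noise, as the iterate approaches the optimum. The paper adapts both
    # the number of workers and the computation load; only the former is sketched here.
    if it in (50, 100, 150):
        k = min(k + 2, n_workers)

loss = np.mean((X @ w - y) ** 2)
print(f"final k = {k}, training MSE = {loss:.4f}")
```

Even in this toy setup the trade-off the abstract refers to is visible: a smaller subset size reduces per-iteration waiting time but increases gradient noise, which is why adapting the number of workers (and, in the paper, the computation load as well) over the run can improve the overall convergence speed.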

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2304.08589
Document Type: Working Paper