
More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server.

Authors :
Ho Q
Cipar J
Cui H
Kim JK
Lee S
Gibbons PB
Gibson GA
Ganger GR
Xing EP
Source :
Advances in neural information processing systems [Adv Neural Inf Process Syst] 2013; Vol. 2013, pp. 1223-1231.
Publication Year :
2013

Abstract

We propose a parameter server system for distributed ML, which follows a Stale Synchronous Parallel (SSP) model of computation that maximizes the time computational workers spend doing useful work on ML algorithms, while still providing correctness guarantees. The parameter server provides an easy-to-use shared interface for read/write access to an ML model's values (parameters and variables), and the SSP model allows distributed workers to read older, stale versions of these values from a local cache, instead of waiting to get them from central storage. This significantly increases the proportion of time workers spend computing, as opposed to waiting. Furthermore, the SSP model ensures ML algorithm correctness by limiting the maximum age of the stale values. We provide a proof of correctness under SSP, as well as empirical results demonstrating that the SSP model achieves faster algorithm convergence on several different ML problems, compared to fully synchronous and asynchronous schemes.
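To illustrate the bounded-staleness idea described in the abstract, the following is a minimal Python sketch of an SSP-style shared table. The class and method names (SSPTable, inc, clock, get) and the threading-based design are illustrative assumptions, not the authors' actual interface or implementation: a reader at worker clock c may return cached values as long as the slowest worker has reached clock c minus the staleness bound; otherwise it blocks.

```python
import threading
from collections import defaultdict

class SSPTable:
    """Sketch of a stale-synchronous-parallel (SSP) key-value table.

    Each worker advances a logical clock via clock(). A read at worker
    clock c may proceed only if the slowest worker has reached at least
    clock c - staleness; otherwise the reader waits. This bounds the
    maximum age of the stale values any worker can observe.
    """

    def __init__(self, num_workers, staleness):
        self.staleness = staleness
        self.values = defaultdict(float)          # shared model parameters
        self.worker_clock = [0] * num_workers     # per-worker logical clocks
        self.cond = threading.Condition()

    def inc(self, key, delta):
        """Accumulate an additive update into the shared table."""
        with self.cond:
            self.values[key] += delta

    def clock(self, worker_id):
        """Signal that worker_id has finished one iteration."""
        with self.cond:
            self.worker_clock[worker_id] += 1
            self.cond.notify_all()

    def get(self, worker_id, key):
        """Read key, blocking only if the staleness bound would be violated."""
        with self.cond:
            my_clock = self.worker_clock[worker_id]
            while min(self.worker_clock) < my_clock - self.staleness:
                self.cond.wait()
            return self.values[key]
```

With staleness set to 0 this behaves like a fully synchronous (BSP) barrier at every clock, while a very large staleness bound approximates fully asynchronous execution; SSP sits between the two, trading bounded error for less waiting.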

Details

Language :
English
ISSN :
1049-5258
Volume :
2013
Database :
MEDLINE
Journal :
Advances in neural information processing systems
Publication Type :
Academic Journal
Accession number :
25400488