
Asynchronous Parallel, Sparse Approximated SVRG for High-Dimensional Machine Learning

Authors:
Hongying Liu
Jianhui Liu
Jun Fan
Fanhua Shang
Hua Huang
Yuanyuan Liu
Source:
IEEE Transactions on Knowledge and Data Engineering, 34:5636-5648
Publication Year:
2022
Publisher:
Institute of Electrical and Electronics Engineers (IEEE), 2022.

Abstract

With the increase in data size and the development of multi-core computers, asynchronous parallel stochastic optimization algorithms such as KroMagnon have gained significant attention. In this paper, we propose a new Sparse approximation and asynchronous parallel Stochastic Variance Reduced Gradient (SSVRG) method for sparse, high-dimensional machine learning problems. Unlike standard SVRG and its asynchronous parallel variant KroMagnon, SSVRG sets the snapshot point to the average of all iterates in the previous epoch, which allows it to take much larger learning rates and makes it more robust to the choice of learning rate. In particular, we use a sparse approximation of the popular SVRG gradient estimator to perform completely sparse updates. SSVRG therefore has a much lower per-iteration computational cost than its dense counterpart, SVRG++, and lends itself well to asynchronous parallel implementation. Moreover, we provide convergence guarantees for SSVRG on both strongly convex (SC) and non-strongly convex (non-SC) problems, whereas existing asynchronous algorithms (e.g., KroMagnon) only have convergence guarantees for SC problems. Finally, we extend SSVRG to non-smooth and asynchronous parallel settings. Numerical results demonstrate that SSVRG converges significantly faster than state-of-the-art asynchronous parallel methods such as KroMagnon, and is usually more than three orders of magnitude faster than SVRG++.
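To make the two ingredients in the abstract concrete, the following is a minimal serial sketch, in Python, of how an epoch-average snapshot and a sparse-approximated SVRG estimator can fit together for sparse logistic regression. It is a hedged illustration, not the paper's algorithm: the function names, the epoch length m = n, the learning rate, and the KroMagnon-style 1/p reweighting of the dense full-gradient term are all assumptions, and the paper's actual method additionally covers non-smooth objectives and asynchronous execution.

import numpy as np
import scipy.sparse as sp


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def full_grad(X, y, w):
    # Full logistic-loss gradient: (1/n) * sum_i -y_i * sigmoid(-y_i * x_i.w) * x_i.
    coef = -y * sigmoid(-y * (X @ w))
    return (X.T @ coef) / X.shape[0]


def ssvrg_sketch(X, y, lr=0.1, epochs=10, seed=0):
    """Serial sketch of sparse-approximated SVRG for logistic regression.
    X: scipy.sparse CSR matrix (n x d); y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    snapshot = w.copy()

    # p[v]: fraction of samples whose support contains coordinate v.  The dense
    # full-gradient term is reweighted by 1/p so each update touches only the
    # sampled example's support while remaining unbiased in expectation.
    p = np.maximum(X.getnnz(axis=0), 1) / n

    for _ in range(epochs):
        mu = full_grad(X, y, snapshot)   # one full pass per epoch
        iterate_sum = np.zeros(d)
        m = n                            # epoch length: an illustrative choice
        for _ in range(m):
            i = rng.integers(n)
            start, end = X.indptr[i], X.indptr[i + 1]
            idx, vals = X.indices[start:end], X.data[start:end]
            # Scalar gradient coefficients of sample i at w and at the snapshot.
            c_w = -y[i] * sigmoid(-y[i] * (vals @ w[idx]))
            c_s = -y[i] * sigmoid(-y[i] * (vals @ snapshot[idx]))
            # Sparse-approximated SVRG direction, supported on idx only.
            v = (c_w - c_s) * vals + mu[idx] / p[idx]
            w[idx] -= lr * v
            iterate_sum += w             # a tuned implementation tracks this lazily
        # Snapshot for the next epoch: average of all iterates in this epoch.
        snapshot = iterate_sum / m
    return w


if __name__ == "__main__":
    X = sp.random(200, 50, density=0.05, format="csr", random_state=0)
    y = np.where(np.random.default_rng(1).standard_normal(200) >= 0, 1.0, -1.0)
    print(ssvrg_sketch(X, y)[:5])

The 1/p reweighting keeps the update unbiased because, over a uniformly sampled example, a coordinate v appears in the support with probability p[v], so the expected contribution of mu[v]/p[v] is exactly mu[v]; setting the snapshot to the epoch average rather than the last iterate is what the abstract credits for tolerating much larger learning rates.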

Details

ISSN:
2326-3865 and 1041-4347
Volume:
34
Database:
OpenAIRE
Journal:
IEEE Transactions on Knowledge and Data Engineering
Accession number:
edsair.doi...........1af75126f360d05f7e9d82e4cba60eba