
Weighted SGD for ℓp Regression with Randomized Preconditioning.

Authors :
Yang J
Chow YL
Ré C
Mahoney MW
Source :
Proceedings of the ... Annual ACM-SIAM Symposium on Discrete Algorithms. ACM-SIAM Symposium on Discrete Algorithms [Proc Annu ACM SIAM Symp Discret Algorithms] 2016 Jan; Vol. 2016, pp. 558-569.
Publication Year :
2016

Abstract

In recent years, stochastic gradient descent (SGD) methods and randomized linear algebra (RLA) algorithms have been applied to many large-scale problems in machine learning and data analysis. SGD methods are easy to implement and applicable to a wide range of convex optimization problems. In contrast, RLA algorithms provide much stronger performance guarantees but are applicable to a narrower class of problems. We aim to bridge the gap between these two methods in solving constrained overdetermined linear regression problems, e.g., ℓ2 and ℓ1 regression problems. We propose a hybrid algorithm named pwSGD that uses RLA techniques for preconditioning and constructing an importance sampling distribution, and then performs an SGD-like iterative process with weighted sampling on the preconditioned system.

By rewriting a deterministic ℓp regression problem as a stochastic optimization problem, we connect pwSGD to several existing ℓp solvers, including RLA methods with algorithmic leveraging (RLA for short).

We prove that pwSGD inherits faster convergence rates that depend only on the lower dimension of the linear system, while maintaining low computational complexity. Such SGD convergence rates are superior to those of other related SGD algorithms such as the weighted randomized Kaczmarz algorithm.

In particular, when solving ℓ1 regression of size n by d, pwSGD returns an approximate solution with ε relative error in the objective value in 𝒪(log n · nnz(A) + poly(d)/ε²) time. This complexity is uniformly better than that of RLA methods in terms of both ε and d when the problem is unconstrained. In the presence of constraints, pwSGD only has to solve a sequence of much simpler and smaller optimization problems over the same constraints, which in general is more efficient than solving the constrained subproblem required in RLA.

For ℓ2 regression, pwSGD returns an approximate solution with ε relative error in the objective value and in the solution vector measured in prediction norm in 𝒪(log n · nnz(A) + poly(d) log(1/ε)/ε) time. We show that for unconstrained ℓ2 regression, this complexity is comparable to that of RLA and is asymptotically better than several state-of-the-art solvers in the regime where the desired accuracy ε, high dimension n, and low dimension d satisfy d ≥ 1/ε and n ≥ d²/ε.

We also provide lower bounds on the coreset complexity for more general regression problems, indicating that new ideas will still be needed to extend similar RLA preconditioning ideas to weighted SGD algorithms for more general regression problems.

Finally, the effectiveness of such algorithms is illustrated numerically on both synthetic and real datasets, and the results are consistent with our theoretical findings and demonstrate that pwSGD converges to a medium-precision solution, e.g., ε = 10⁻³, more quickly.
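To make the hybrid structure described in the abstract concrete, the following minimal Python sketch illustrates the pwSGD idea for unconstrained ℓ2 regression. It assumes a dense Gaussian sketch for the preconditioning step and a simple 1/√t step size; the function name, sketch size, and step-size schedule are illustrative choices and not the paper's exact construction (the paper uses faster randomized projections and sharper parameter settings).

import numpy as np

def pwsgd_l2(A, b, n_iters=1000, sketch_size=None, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    s = sketch_size or 4 * d

    # Step 1: randomized preconditioning. Sketch A, take R from a QR of the
    # sketch, and form the (approximately) well-conditioned basis U = A R^{-1}.
    S = rng.standard_normal((s, n)) / np.sqrt(s)   # Gaussian sketch (illustrative)
    _, R = np.linalg.qr(S @ A)
    U = A @ np.linalg.inv(R)

    # Step 2: importance sampling distribution from the row norms of U
    # (approximate leverage scores for the l2 case).
    p = np.linalg.norm(U, axis=1) ** 2
    p /= p.sum()

    # Step 3: weighted SGD on the preconditioned system. Sample a row i ~ p
    # and take a gradient step reweighted by 1 / (n * p_i).
    y = np.zeros(d)                                 # iterate in preconditioned coordinates
    for t in range(1, n_iters + 1):
        i = rng.choice(n, p=p)
        g = (U[i] @ y - b[i]) * U[i] / (n * p[i])   # unbiased estimate of the averaged gradient
        y -= (step / np.sqrt(t)) * g

    # Map back to the original coordinates: x = R^{-1} y.
    return np.linalg.solve(R, y)

As a usage check, on a tall random instance such as A = rng.standard_normal((100000, 20)) and b = rng.standard_normal(100000), the returned x can be compared against np.linalg.lstsq(A, b, rcond=None)[0]; the relative error in the least-squares objective should shrink as n_iters grows, reflecting the convergence behavior the abstract describes.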

Details

Language :
English
ISSN :
1557-9468
Volume :
2016
Database :
MEDLINE
Journal :
Proceedings of the ... Annual ACM-SIAM Symposium on Discrete Algorithms. ACM-SIAM Symposium on Discrete Algorithms
Publication Type :
Academic Journal
Accession number :
29782626
Full Text :
https://doi.org/10.1137/1.9781611974331.ch41