Scalable Computation of Regularized Precision Matrices via Stochastic Optimization
- Publication Year :
- 2015
Abstract
- We consider the problem of computing a positive definite $p \times p$ inverse covariance matrix, also known as a precision matrix, $\theta=(\theta_{ij})$, that optimizes a regularized Gaussian maximum likelihood problem with the elastic-net regularizer $\sum_{i,j=1}^{p} \lambda (\alpha|\theta_{ij}| + \frac{1}{2}(1-\alpha) \theta_{ij}^2)$, where $\alpha \in [0,1]$ and $\lambda>0$ are regularization parameters. The associated convex semidefinite optimization problem is notoriously difficult to scale to large problems and has received significant attention over the past several years. We propose a new algorithmic framework based on stochastic proximal optimization (on the primal problem) that can be used to obtain near-optimal solutions with substantial computational savings over deterministic algorithms. A key challenge of our work stems from the fact that the optimization problem being investigated does not satisfy the usual assumptions required by stochastic gradient methods. Our proposal (a) has computational guarantees and (b) scales well to large problems, even if the solution is not too sparse, thereby enhancing the scope of regularized maximum likelihood problems to many large-scale problems of contemporary interest. An important aspect of our proposal is to bypass the \emph{deterministic} computation of a matrix inverse by drawing random samples from a suitable multivariate Gaussian distribution.
- Comment : 42 pages
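- The central idea of the abstract can be made concrete. In the standard formulation of this problem, one minimizes $-\log\det\theta + \mathrm{tr}(S\theta)$ plus the elastic-net penalty, where $S$ is the sample covariance matrix. The gradient of the smooth part is $S - \theta^{-1} + \lambda(1-\alpha)\theta$, and the expensive term $\theta^{-1}$ equals $\mathbb{E}[xx^T]$ for $x \sim N(0, \theta^{-1})$, so it can be replaced by a Monte Carlo average over random Gaussian draws. The following is a minimal Python sketch of a stochastic proximal gradient iteration built on that identity; the specifics (a Cholesky-based sampler, fixed step size, sample count, and eigenvalue safeguard) are illustrative assumptions, not the paper's actual algorithm.

```python
# Sketch (not the paper's exact method) of stochastic proximal gradient
# descent for the elastic-net-regularized Gaussian MLE:
#   minimize_{Theta > 0}  -log det(Theta) + tr(S @ Theta)
#                         + sum_ij lam * (alpha*|theta_ij| + 0.5*(1-alpha)*theta_ij^2)
# The smooth-part gradient S - inv(Theta) + lam*(1-alpha)*Theta is made
# stochastic by estimating inv(Theta) as an average of x x^T with
# x ~ N(0, inv(Theta)); the l1 term is handled by a proximal step.
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def soft_threshold(A, tau):
    """Elementwise soft-thresholding: the proximal map of tau * ||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def stochastic_prox_precision(S, lam, alpha, n_iter=500, n_samples=30,
                              step=1e-2, rng=None):
    rng = np.random.default_rng(rng)
    p = S.shape[0]
    Theta = np.eye(p)                        # positive definite starting point
    for _ in range(n_iter):
        # Draw x ~ N(0, Theta^{-1}): with Theta = L L^T, x = L^{-T} z
        # for z ~ N(0, I) has covariance (L L^T)^{-1} = Theta^{-1}.
        L = cholesky(Theta, lower=True)
        Z = rng.standard_normal((p, n_samples))
        X = solve_triangular(L, Z, lower=True, trans='T')
        inv_est = X @ X.T / n_samples        # unbiased estimate of Theta^{-1}
        # Stochastic gradient of the smooth part of the objective.
        grad = S - inv_est + lam * (1.0 - alpha) * Theta
        # Proximal (soft-thresholding) step for the l1 penalty, symmetrized.
        Theta = soft_threshold(Theta - step * grad, step * lam * alpha)
        Theta = 0.5 * (Theta + Theta.T)
        # Crude eigenvalue floor to keep the iterate positive definite
        # (the paper handles feasibility with its own machinery).
        w, V = np.linalg.eigh(Theta)
        Theta = (V * np.maximum(w, 1e-6)) @ V.T
    return Theta
```

- The Cholesky-based sampler above is simply the most transparent way to realize draws from $N(0, \theta^{-1})$ without forming the inverse explicitly; at scale one would substitute a cheaper approximate sampler, which is where the computational savings described in the abstract come from.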
- Subjects :
- Mathematics - Statistics Theory
- Mathematics - Optimization and Control
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.1509.00426
- Document Type :
- Working Paper