
Sparse Classification: a scalable discrete optimization perspective

Authors :
Bertsimas, Dimitris
Pauphilet, Jean
Van Parys, Bart
Source :
Machine Learning, 2021
Publication Year :
2017

Abstract

We formulate the sparse classification problem of $n$ samples with $p$ features as a binary convex optimization problem and propose a cutting-plane algorithm to solve it exactly. For sparse logistic regression and sparse SVM, our algorithm finds optimal solutions for $n$ and $p$ in the $10,000$s within minutes. On synthetic data, our algorithm achieves perfect support recovery in the large sample regime. Namely, there exists an $n_0$ such that for $n<n_0$ the algorithm takes a long time to find the optimal solution and does not recover the correct support, while for $n\geqslant n_0$ it quickly detects all the true features and does not return any false features. In contrast, while Lasso accurately detects all the true features, it persistently returns incorrect features, even as the number of observations increases. Consequently, on numerous real-world experiments, our outer-approximation algorithm returns sparser classifiers while achieving predictive accuracy similar to Lasso's. To support our observations, we analyze conditions on the sample size needed to ensure full support recovery in classification. Under some assumptions on the data generating process, we prove that information-theoretic limitations impose $n_0 < C \left(2 + \sigma^2\right) k \log(p-k)$, for some constant $C>0$.
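The cutting-plane scheme described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes a ridge-regularized logistic inner problem, solves that inner problem approximately with L-BFGS, and replaces the binary master problem (normally a MILP solved with lazy cut callbacks) by brute-force enumeration over $k$-subsets, which is only viable for tiny $p$. All function names and the $\gamma$ regularization parameter are choices made for this sketch.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import minimize
from scipy.special import expit

def inner_cost_and_grad(X, y, s, gamma=1.0):
    """Oracle called at each incumbent binary support s.

    Returns c(s) = min_w (1/n) sum_i log(1 + exp(-y_i x_i^T w)) + ||w||^2 / (2 gamma)
    over weights supported on s, together with a subgradient of c with
    respect to s (valid because c is a pointwise maximum of functions
    that are linear in s, hence convex in s).
    """
    n, p = X.shape
    idx = np.flatnonzero(s)
    Xs = X[:, idx]

    def obj(w):
        margins = y * (Xs @ w)
        return np.logaddexp(0.0, -margins).sum() / n + w @ w / (2 * gamma)

    w = minimize(obj, np.zeros(idx.size), method="L-BFGS-B").x
    # Dual variables of the logistic loss at the optimum:
    # alpha_i = -y_i * sigmoid(-y_i x_i^T w) / n  (expit is numerically stable).
    alpha = -y * expit(-y * (Xs @ w)) / n
    # d c / d s_j = -(gamma/2) * (X_j^T alpha)^2; nonpositive, since
    # allowing an extra feature can only lower the optimal cost.
    grad_s = -(gamma / 2.0) * (X.T @ alpha) ** 2
    return obj(w), grad_s

def cutting_plane(X, y, k, gamma=1.0, max_iters=50):
    """Toy outer-approximation loop over k-sparse supports."""
    p = X.shape[1]
    supports = [np.isin(np.arange(p), c).astype(float)
                for c in combinations(range(p), k)]
    cuts, best_cost, best_s = [], np.inf, None
    s = supports[0]
    for _ in range(max_iters):
        cost, grad = inner_cost_and_grad(X, y, s, gamma)
        cuts.append((cost, grad, s))
        if cost < best_cost:
            best_cost, best_s = cost, s
        # Master problem: minimize the piecewise-linear lower model
        # built from the cuts c(s_t) + grad_t^T (s - s_t).
        model = lambda sv: max(c + g @ (sv - st) for c, g, st in cuts)
        s = min(supports, key=model)
        if model(s) >= best_cost - 1e-6:  # lower bound meets incumbent
            break
    return best_s, best_cost
```

With an exact inner solver and a MILP master, this loop mirrors the structure of the algorithm in the abstract; here both pieces are approximate stand-ins, so it should be read as an illustration of the cut $c(s) \geq c(\bar{s}) + \nabla c(\bar{s})^\top (s - \bar{s})$ rather than as a scalable implementation.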

Details

Database :
arXiv
Journal :
Machine Learning, 2021
Publication Type :
Report
Accession number :
edsarx.1710.01352
Document Type :
Working Paper
Full Text :
https://doi.org/10.1007/s10994-021-06085-5