Adversarial Classification via Distributional Robustness with Wasserstein Ambiguity
- Publication Year :
- 2020
Abstract
- We study a model for adversarial classification based on distributionally robust chance constraints. We show that under Wasserstein ambiguity, the model aims to minimize the conditional value-at-risk of the distance to misclassification, and we explore links to adversarial classification models proposed earlier and to maximum-margin classifiers. We also provide a reformulation of the distributionally robust model for linear classification, and show it is equivalent to minimizing a regularized ramp loss objective. Numerical experiments show that, despite the nonconvexity of this formulation, standard descent methods appear to converge to the global minimizer for this problem. Inspired by this observation, we show that, for a certain class of distributions, the only stationary point of the regularized ramp loss minimization problem is the global minimizer.
- Comment :
- 32 pages
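The abstract reports that the distributionally robust linear classification model reduces to minimizing a regularized ramp loss, and that standard descent methods behave well on it despite nonconvexity. The sketch below is an illustrative NumPy implementation of that general idea, not the paper's exact formulation: it minimizes the average ramp loss ramp(z) = min(1, max(0, 1 - z)) over a linear classifier plus a norm penalty, using plain subgradient descent on synthetic data. The specific objective weights, step size, and data are assumptions for demonstration.

```python
import numpy as np

def ramp_loss(z):
    # ramp(z) = min(1, max(0, 1 - z)); flat for z <= 0 and z >= 1
    return np.clip(1.0 - z, 0.0, 1.0)

def objective(w, b, X, y, lam):
    # average ramp loss of the margins plus an L2-norm penalty on w
    z = y * (X @ w + b)
    return ramp_loss(z).mean() + lam * np.linalg.norm(w)

def subgradient_step(w, b, X, y, lam, lr):
    z = y * (X @ w + b)
    # the ramp is linear (slope -1 in z) only on the interval 0 < z < 1
    active = (z > 0.0) & (z < 1.0)
    gz = np.where(active, -1.0, 0.0)
    gw = (X * (gz * y)[:, None]).mean(axis=0)
    nw = np.linalg.norm(w)
    if nw > 0:
        gw = gw + lam * w / nw  # subgradient of lam * ||w||
    gb = (gz * y).mean()
    return w - lr * gw, b - lr * gb

# toy two-Gaussian data, purely for illustration
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.concatenate([-np.ones(50), np.ones(50)])

w, b = 0.1 * rng.normal(size=2), 0.0  # nonzero init so subgradients are active
for _ in range(500):
    w, b = subgradient_step(w, b, X, y, lam=0.01, lr=0.1)

acc = np.mean(np.sign(X @ w + b) == y)
```

On data like this, plain subgradient descent typically drives the ramp objective down and separates the classes, consistent with the abstract's empirical observation that descent methods tend to find the global minimizer of the regularized ramp loss.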
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2005.13815
- Document Type :
- Working Paper