
Consistent Robust Adversarial Prediction for General Multiclass Classification

Authors:
Fathony, Rizal
Asif, Kaiser
Liu, Anqi
Bashiri, Mohammad Ali
Xing, Wei
Behpour, Sima
Zhang, Xinhua
Ziebart, Brian D.
Publication Year:
2018

Abstract

We propose a robust adversarial prediction framework for general multiclass classification. Our method seeks predictive distributions that robustly optimize non-convex and non-continuous multiclass loss metrics against the worst-case conditional label distributions (the adversarial distributions) that (approximately) match the statistics of the training data. Although the optimized loss metrics are non-convex and non-continuous, the dual formulation of the framework is a convex optimization problem that can be recast as a risk minimization model with a prescribed convex surrogate loss we call the adversarial surrogate loss. We show that the adversarial surrogate losses fill an existing gap in surrogate loss construction for general multiclass classification problems, by simultaneously aligning better with the original multiclass loss, guaranteeing Fisher consistency, enabling a way to incorporate rich feature spaces via the kernel trick, and providing competitive performance in practice.

Comment: 49 pages, 10 figures
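The abstract describes a game between a predictor's distribution and a worst-case (adversarial) label distribution constrained to match training statistics. As a rough illustration only (not the authors' code), the sketch below solves the per-example inner zero-sum game with a standard linear program, under the hypothetical simplification that the moment-matching constraints have already been absorbed into fixed Lagrangian potentials added to the loss matrix:

```python
# Minimal sketch of an inner adversarial game: the predictor picks a
# distribution p over labels, the adversary picks q, and the game value
# max_q min_p p^T (L + 1 f^T) q is found by linear programming.
# The potentials f are a hypothetical stand-in for the paper's
# moment-matching constraints.
import numpy as np
from scipy.optimize import linprog

def adversarial_game_value(loss, potentials):
    """Value and adversary strategy of the zero-sum game over labels.

    loss:       (k, k) matrix, loss[i, j] = loss(predict i, true label j)
    potentials: (k,) Lagrangian potentials added per true-label column
    """
    k = loss.shape[0]
    M = loss + potentials[np.newaxis, :]   # adversary's payoff matrix
    # Variables: q (k adversary probabilities) and v (game value).
    # Maximize v  s.t.  (M q)_i >= v for all i,  sum(q) = 1,  q >= 0.
    c = np.zeros(k + 1)
    c[-1] = -1.0                                           # minimize -v
    A_ub = np.hstack([-M, np.ones((k, 1))])                # v - (M q)_i <= 0
    b_ub = np.zeros(k)
    A_eq = np.hstack([np.ones((1, k)), np.zeros((1, 1))])  # sum(q) = 1
    b_eq = np.ones(1)
    bounds = [(0, None)] * k + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:k]

# Zero-one loss, two classes, no potential shift: the adversary hedges
# uniformly and the game value is 0.5.
zero_one = np.array([[0.0, 1.0], [1.0, 0.0]])
value, q = adversarial_game_value(zero_one, np.zeros(2))
```

For the zero-one loss the game value reduces to the familiar worst-case error of a hedging predictor; the paper's contribution is showing that the dual of this construction yields Fisher-consistent convex surrogates for general (non-convex, non-continuous) multiclass loss metrics.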

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.1812.07526
Document Type:
Working Paper