1. An approach to optimizing abstaining area for small sample data classification
- Author
Blaise Hanczar, Jean-Daniel Zucker, Informatique, Biologie Intégrative et Systèmes Complexes (IBISC), Université d'Évry-Val-d'Essonne (UEVE), Unité de modélisation mathématique et informatique des systèmes complexes [Bondy] (UMMISCO), Institut de Recherche pour le Développement (IRD)-Université Pierre et Marie Curie - Paris 6 (UPMC)-Université de Yaoundé I-Institut de la francophonie pour l'informatique-Université Cheikh Anta Diop [Dakar, Sénégal] (UCAD)-Université Gaston Bergé (Saint-Louis, Sénégal)-Université Cadi Ayyad [Marrakech] (UCA)
- Subjects
Abstaining classifier, Reject option, Feature vector, Posterior probability, Data classification, Kernel density estimation, Bayesian probability, 02 engineering and technology, 01 natural sciences, Normal distribution, 010104 statistics & probability, [INFO.INFO-LG]Computer Science [cs]/Machine Learning [cs.LG], Artificial Intelligence, 0202 electrical engineering, electronic engineering, information engineering, 0101 mathematics, Mathematics, Supervised learning, General Engineering, Estimator, Pattern recognition, ROC curve estimation, Small-sample setting, Computer Science Applications, ComputingMethodologies_PATTERNRECOGNITION, 020201 artificial intelligence & image processing
- Abstract
Given a classification task, one approach to improving accuracy relies on abstaining classifiers. These classifiers are trained to reject observations for which the predicted values are not reliable enough; the rejected observations belong to an abstaining area in the feature space. Two equivalent methods exist to compute the theoretically optimal abstaining area for a given classification problem. The first is based on the posterior probabilities computed by the model, and the second is based on the derivative of the model's ROC function. Although the second method has been shown to give the best results, in small-sample settings such as those found in omics data, the estimates of both the posterior probabilities and the derivative of the ROC curve lack precision, leading to far-from-optimal abstaining areas. As a consequence, neither method brings the expected improvement in accuracy. We propose five alternative algorithms for computing the abstaining area that are adapted to small-sample problems. The idea behind these algorithms is to compute an accurate and robust estimate of the ROC curve and its derivatives. These estimates rely mainly on the assumption that the distribution of the classifier's output for each class is normal or a mixture of normal distributions. These distributions are estimated by a kernel density estimator or a Bayesian semiparametric estimator. Another method works on an approximation of the convex hull of the ROC curve. Once the derivatives of the ROC curve are estimated, the optimal abstaining area can be computed directly. The performance of our algorithms is directly related to their capacity to compute an accurate estimate of the ROC curve. A sensitivity analysis of our methods with respect to dataset size and rejection cost has been carried out on a set of experiments. We show that our methods improve the performance of abstaining classifiers on several real datasets and for different learning algorithms.
- Published
- 2018