100% Classification Accuracy Considered Harmful: The Normalized Information Transfer Factor Explains the Accuracy Paradox
- Author
- Francisco J. Valverde-Albacete and Carmen Peláez-Moreno
- Subjects
- Information transfer, Classification accuracy, Accuracy paradox, Normalized information transfer factor, Entropy-modulated accuracy, Entropy (information theory), Contingency tables, Combinatorial analysis, Machine learning, Classifiers, Word error rate, Decision theory, Data mining, Statistical signal processing, Normalization (statistics), Interpretability, Predictive power, Magnetoencephalography, Visual stimulation, Mental task, Video recording, Algorithms, Computer modeling, Theoretical models, Statistics, Applied mathematics, Mathematics, Telecommunications, Computer science, Artificial intelligence, Learning, Humans, Statistical methods, Controlled study
- Abstract
- The most widely used measure of classification performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Although they optimize the classification error rate, high-accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis in which every possible contingency matrix of 2-, 3- and 4-class classifiers is depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into account the information learned by classifiers. We thus obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer (NIT) factor, a measure of how efficiently information is transmitted from the input to the output set of classes. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier rather than the classification error count. The NIT factor measures the effectiveness of the learning process in classifiers, makes it harder for them to "cheat" through techniques such as specialization, and promotes the interpretability of results. Their use is demonstrated in a mind-reading task competition that aims at decoding the identity of a video stimulus from magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based on accuracy, choosing more meaningful and interpretable classifiers. Francisco José Valverde-Albacete has been partially supported by EU FP7 project LiMoSINe (contract 288024): www.limosine-project.eu. Carmen Peláez-Moreno has been partially supported by the Spanish Government-Comisión Interministerial de Ciencia y Tecnología project TEC2011-26807.
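As a concrete illustration of the quantities named in the abstract, the minimal sketch below computes the entropy-triangle coordinates, the EMA and the NIT factor from a contingency matrix. It is not the authors' implementation; it assumes base-2 entropies and one plausible reading of the definitions, EMA = 2^(-H(X|Y)) and NIT = 2^(I(X;Y))/|X|, with X the true class and Y the prediction.

```python
# Minimal sketch (not the authors' code): entropy-triangle coordinates,
# entropy-modulated accuracy (EMA) and normalized information transfer
# (NIT) factor from a contingency matrix, assuming base-2 entropies,
# EMA = 2**(-H(X|Y)) and NIT = 2**I(X;Y) / |X|.
import numpy as np

def entropies(cm):
    """H(X), H(Y), H(X,Y) in bits; rows of cm index the true class X,
    columns the predicted class Y."""
    p = cm / cm.sum()                      # joint distribution P(X, Y)
    h = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))
    return h(p.sum(axis=1)), h(p.sum(axis=0)), h(p.ravel())

def ema_nit(cm):
    cm = np.asarray(cm, dtype=float)
    hx, hy, hxy = entropies(cm)
    mi = hx + hy - hxy                     # mutual information I(X;Y)
    ema = 2.0 ** (mi - hx)                 # = 2**(-H(X|Y))
    nit = 2.0 ** mi / cm.shape[0]          # 2**I over the number of classes
    return ema, nit

def triangle_coords(cm):
    """Coordinates on the entropy triangle, assuming the decomposition
    H(U_X) + H(U_Y) = dH + 2*I + VI, normalized to sum to 1."""
    cm = np.asarray(cm, dtype=float)
    hx, hy, hxy = entropies(cm)
    mi = hx + hy - hxy
    vi = hxy - mi                          # H(X|Y) + H(Y|X)
    hu = np.log2(cm.shape[0]) + np.log2(cm.shape[1])
    dh = hu - (hx + hy)                    # divergence from uniform marginals
    return dh / hu, 2 * mi / hu, vi / hu

# A specialized "cheat" on a skewed 2-class input: always predict class 0.
# Accuracy is 0.9, yet no information is transferred.
cheat = np.array([[90, 0], [10, 0]], dtype=float)
print(ema_nit(cheat))          # EMA ~ 0.72 (< 0.9), NIT = 0.5 (its minimum, 1/|X|)
print(triangle_coords(cheat))  # all mass on the dH' and VI' axes, none on 2*MI'
```

On this toy example a 90%-accurate classifier gets EMA of roughly 0.72 and the minimum NIT factor of 1/2, illustrating how these measures penalize majority-class "cheating" that raw accuracy rewards.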
- Published
- 2014