
When is the Naive Bayes approximation not so naive?

Authors :
Ana Ruiz Linares
Christopher R. Stephens
Hugo Flores Huerta
Source :
Machine Learning. 107:397-441
Publication Year :
2017
Publisher :
Springer Science and Business Media LLC, 2017.

Abstract

The Naive Bayes approximation (NBA) and its associated classifier are widely used and offer robust performance across a large spectrum of problem domains. Since the approximation rests on a very strong assumption, independence among features, this robustness has been somewhat puzzling. Various hypotheses have been put forward to explain its success, and many generalizations have been proposed. In this paper we propose a set of "local" error measures, associated with the likelihood functions for subsets of attributes and for each class, and show explicitly how these local errors combine to give a "global" error associated with the full attribute set. In so doing we formulate a framework within which the phenomenon of error cancellation, or augmentation, can be quantified and its impact on classifier performance estimated and predicted a priori. These diagnostics allow us to develop a deeper and more quantitative understanding of why the NBA is so robust and of the circumstances under which it can be expected to break down. We show how the diagnostics can be used to select which features to combine, and we use them in a simple generalization of the NBA, applying the resulting classifier to a set of real-world data sets.
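As a rough illustration of the idea behind local and global errors, the following Python sketch compares, on a toy two-feature dataset, each empirical class-conditional likelihood with its Naive Bayes product approximation and shows how the per-class errors can partially cancel in the log-odds used for classification. This is a hypothetical sketch only: the data, the functions empirical and naive, and the log-ratio error are illustrative assumptions and are not the paper's actual definitions or notation.

    # Hypothetical sketch: compare the empirical class-conditional likelihood
    # P(x1, x2 | c) with its Naive Bayes product approximation
    # P(x1 | c) * P(x2 | c) on a toy dataset, and show how per-class errors
    # can partially cancel in the log-odds used for classification.
    import math

    # Toy data: rows are (x1, x2, class). Chosen so both classes violate
    # feature independence at the query point below.
    data = [
        (0, 0, "A"), (0, 1, "A"), (1, 1, "A"), (1, 1, "A"),
        (0, 0, "B"), (0, 0, "B"), (1, 0, "B"), (1, 1, "B"),
    ]

    def empirical(x, label, subset):
        """Empirical P(x_S | class = label) for the feature subset S."""
        rows = [r for r in data if r[2] == label]
        hits = [r for r in rows if all(r[i] == x[i] for i in subset)]
        return len(hits) / len(rows)

    def naive(x, label, subset):
        """Naive Bayes product approximation over the subset S."""
        p = 1.0
        for i in subset:
            p *= empirical(x, label, [i])
        return p

    x = (1, 1)        # query point
    full = [0, 1]     # the full attribute set
    for label in ("A", "B"):
        exact_p = empirical(x, label, full)
        nb_p = naive(x, label, full)
        err = math.log(nb_p / exact_p)   # per-class log-ratio "error"
        print(f"class {label}: exact={exact_p:.3f} naive={nb_p:.3f} error={err:+.3f}")

    # With equal priors, classification depends only on the log-odds of the
    # two class-conditional likelihoods. The per-class errors above have the
    # same sign, so they partially cancel in the ratio.
    exact_odds = math.log(empirical(x, "A", full) / empirical(x, "B", full))
    nb_odds = math.log(naive(x, "A", full) / naive(x, "B", full))
    print(f"log-odds A vs B: exact={exact_odds:+.3f} naive={nb_odds:+.3f}")

On this toy example both class-conditional likelihoods are underestimated by the product approximation, but because the errors have the same sign they partially cancel in the log-odds and the predicted class is unchanged; this is the flavour of error cancellation the abstract refers to, though the paper's measures are defined more generally over subsets of attributes.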

Details

ISSN :
1573-0565 and 0885-6125
Volume :
107
Database :
OpenAIRE
Journal :
Machine Learning
Accession number :
edsair.doi...........fb4b1cb9d6a6fc9abc9631d3df4bd0e3