
Shapley Values for Feature Selection: The Good, the Bad, and the Axioms

Authors :
Daniel Fryer
Inga Strumke
Hien Nguyen
Source :
IEEE Access, Vol. 9, pp. 144352-144360 (2021)
Publication Year :
2021
Publisher :
IEEE, 2021.

Abstract

The Shapley value has become popular in the Explainable AI (XAI) literature, thanks, to a large extent, to a solid theoretical foundation, including four “favourable and fair” axioms for attribution in transferable utility games. The Shapley value is provably the only solution concept satisfying these axioms. In this paper, we introduce the Shapley value and draw attention to its recent uses as a feature selection tool. We call into question this use of the Shapley value, using simple, abstract “toy” counterexamples to illustrate that the axioms may work against the goals of feature selection. From this, we develop a number of insights that are then investigated in concrete simulation settings, with a variety of Shapley value formulations, including SHapley Additive exPlanations (SHAP) and Shapley Additive Global importancE (SAGE). The aim is not to encourage any use of the Shapley value for feature selection, but rather to clarify various limitations around its current use in the literature. In so doing, we hope to help demystify certain aspects of the Shapley value axioms that are viewed as “favourable”. In particular, we wish to highlight that the favourability of the axioms depends non-trivially on the way in which the Shapley value is appropriated in the XAI application.
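
For readers unfamiliar with the quantity under discussion, the standard Shapley value of player i in a transferable utility game with player set N (|N| = n) and characteristic function v is

    \phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!} \bigl( v(S \cup \{i\}) - v(S) \bigr),

i.e. player i's marginal contribution averaged over all orderings of the players. The four axioms referred to above are the standard ones: efficiency, symmetry, the null (dummy) player axiom, and additivity. The sketch below is not taken from the paper; it is a minimal, self-contained Python illustration of computing exact Shapley values for a hypothetical three-player toy game, of the kind the abstract refers to as "toy" counterexamples. The characteristic function v is an assumed example in which players 1 and 2 are perfect substitutes and player 3 is a dummy.

    # Minimal sketch (illustrative, not from the paper): exact Shapley values
    # for a small transferable utility game, standard library only.
    from itertools import combinations
    from math import factorial

    def shapley_values(players, v):
        """Shapley value of each player, where v maps a frozenset of players
        to a real number (the worth of that coalition)."""
        n = len(players)
        values = {}
        for i in players:
            others = [p for p in players if p != i]
            phi = 0.0
            for k in range(n):
                for coalition in combinations(others, k):
                    S = frozenset(coalition)
                    # weight |S|! (n - |S| - 1)! / n! from the Shapley formula
                    weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                    phi += weight * (v(S | {i}) - v(S))
            values[i] = phi
        return values

    # Hypothetical game: a coalition is worth 1 if it contains player 1 or 2;
    # player 3 never changes the worth (a null player).
    def v(S):
        return 1.0 if 1 in S or 2 in S else 0.0

    print(shapley_values([1, 2, 3], v))  # {1: 0.5, 2: 0.5, 3: 0.0}

By symmetry players 1 and 2 split the total worth equally and the dummy player 3 receives zero, exactly the behaviour the axioms guarantee; the paper's argument concerns whether such guarantees are actually the properties one wants when selecting features.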

Details

Language :
English
ISSN :
2169-3536
Volume :
9
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.7c55ac4c4b9045fe8205b74547200e28
Document Type :
article
Full Text :
https://doi.org/10.1109/ACCESS.2021.3119110