
XAI-TRIS: non-linear image benchmarks to quantify false positive post-hoc attribution of feature importance.

Authors:
Clark, Benedict
Wilming, Rick
Haufe, Stefan
Source:
Machine Learning; Sep 2024, Vol. 113, Issue 9, p6871-6910, 40p
Publication Year:
2024

Abstract

The field of 'explainable' artificial intelligence (XAI) has produced highly acclaimed methods that seek to make the decisions of complex machine learning (ML) methods 'understandable' to humans, for example by attributing 'importance' scores to input features. Yet a lack of formal underpinning leaves it unclear what conclusions can safely be drawn from the results of a given XAI method, and has so far hindered the theoretical verification and empirical validation of XAI methods. This means that challenging non-linear problems, typically solved by deep neural networks, presently lack appropriate remedies. Here, we craft benchmark datasets for one linear and three different non-linear classification scenarios, in which the important class-conditional features are known by design, serving as ground-truth explanations. Using novel quantitative metrics, we benchmark the explanation performance of a wide set of XAI methods across three deep learning model architectures. We show that popular XAI methods are often unable to significantly outperform random performance baselines and edge detection methods, attributing false-positive importance to features with no statistical relationship to the prediction target rather than to truly important features. Moreover, we demonstrate that explanations derived from different model architectures can be vastly different, and thus prone to misinterpretation even under controlled conditions. [ABSTRACT FROM AUTHOR]
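The abstract describes evaluating attribution maps against ground-truth masks of class-conditional features, with random attributions as a baseline. A minimal sketch of this kind of evaluation is shown below; the function name `topk_precision` and the specific metric (precision of the top-k attributed pixels, with k equal to the number of ground-truth pixels) are illustrative assumptions, not the paper's exact metric definitions.

```python
# Hypothetical sketch: score an attribution map against a known mask of
# truly important pixels, and compare to a random-attribution baseline.
import numpy as np

def topk_precision(attribution, truth_mask):
    """Fraction of the k highest-attributed pixels that are truly
    important, where k is the number of ground-truth pixels."""
    k = int(truth_mask.sum())
    top_idx = np.argsort(attribution.ravel())[::-1][:k]
    return float(truth_mask.ravel()[top_idx].mean())

rng = np.random.default_rng(0)

# Ground truth: a 2x2 patch of known important pixels in an 8x8 image.
truth = np.zeros((8, 8))
truth[2:4, 2:4] = 1.0

# A "good" attribution concentrates its mass on the important pixels.
good = rng.uniform(0.0, 0.1, size=(8, 8))
good[2:4, 2:4] += 1.0

# Random baseline: attributions with no statistical relationship
# to the ground truth, as used for comparison in the benchmark.
random_attr = rng.uniform(size=(8, 8))

print(topk_precision(good, truth))         # 1.0
print(topk_precision(random_attr, truth))  # typically well below 1.0
```

An XAI method whose attributions cannot beat such a random baseline is, in the abstract's terms, assigning false-positive importance.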

Details

Language:
English
ISSN:
0885-6125
Volume:
113
Issue:
9
Database:
Complementary Index
Journal:
Machine Learning
Publication Type:
Academic Journal
Accession Number:
178877153
Full Text:
https://doi.org/10.1007/s10994-024-06574-3