
Can you trust your explanations? A robustness test for feature attribution methods

Authors :
Vascotto, Ilaria
Rodriguez, Alex
Bonaita, Alessandro
Bortolussi, Luca
Publication Year :
2024

Abstract

Growing legislative concern over the use of Artificial Intelligence (AI) has recently led to a series of regulations striving for more transparent, trustworthy and accountable AI. Alongside these proposals, the field of Explainable AI (XAI) has grown rapidly, but the application of its techniques has at times led to unexpected results. Robustness is, in fact, a key property of these approaches that is often overlooked: it is necessary to evaluate the stability of an explanation (under random and adversarial perturbations) to ensure that the results are trustworthy. To this end, we propose a test to evaluate robustness to non-adversarial perturbations and an ensemble approach to analyse in greater depth the robustness of XAI methods applied to neural networks and tabular datasets. We show how leveraging the manifold hypothesis and ensemble approaches can be beneficial to an in-depth analysis of robustness.

Comment: 8 pages, 3 figures
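The abstract describes evaluating the stability of a feature attribution under random (non-adversarial) perturbations of the input. The paper's actual test is not reproduced here; the following is a minimal, hypothetical sketch of the general idea, assuming a simple gradient-style attribution for a linear scorer and using mean cosine similarity between the original and perturbed attributions as a stability score (all names and parameters are illustrative, not from the paper):

```python
import math
import random

random.seed(0)

def attribution(weights, x):
    # Hypothetical gradient-style attribution for a linear scorer:
    # the contribution of feature i is w_i * x_i.
    return [w * v for w, v in zip(weights, x)]

def cosine(a, b):
    # Cosine similarity between two attribution vectors.
    num = sum(p * q for p, q in zip(a, b))
    den = math.sqrt(sum(p * p for p in a)) * math.sqrt(sum(q * q for q in b))
    return num / den if den else 0.0

def robustness_score(weights, x, n_perturb=100, eps=0.05):
    """Mean cosine similarity between the attribution of x and the
    attributions of small random (non-adversarial) perturbations of x.
    A score near 1.0 indicates a stable explanation."""
    base = attribution(weights, x)
    sims = []
    for _ in range(n_perturb):
        x_pert = [v + random.gauss(0.0, eps) for v in x]
        sims.append(cosine(base, attribution(weights, x_pert)))
    return sum(sims) / len(sims)

weights = [0.5, -1.2, 0.8]   # toy model parameters (illustrative)
x = [1.0, 2.0, -1.0]         # toy input instance (illustrative)
score = robustness_score(weights, x)
print(score > 0.95)  # a stable attribution stays nearly parallel under small noise
```

In this toy setup the perturbation scale `eps` is small relative to the feature magnitudes, so the attribution barely rotates and the score stays close to 1; an unstable explainer would produce markedly lower scores under the same noise.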

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2406.14349
Document Type :
Working Paper