Evaluating the necessity of the multiple metrics for assessing explainable AI: A critical examination.

Authors :
Pawlicki, Marek
Pawlicka, Aleksandra
Uccello, Federica
Szelest, Sebastian
D'Antonio, Salvatore
Kozik, Rafał
Choraś, Michał
Source :
Neurocomputing, Oct 2024, Vol. 602.
Publication Year :
2024

Abstract

This paper investigates the specific properties of Explainable Artificial Intelligence (xAI), particularly when implemented in AI/ML models across high-stakes sectors, in this case cybersecurity. The authors execute a comprehensive systematic review of xAI properties, various evaluation metrics, and existing frameworks to assess their utility and relevance. Subsequently, the experimental sections evaluate selected xAI techniques against these metrics, delivering key insights into their practical utility and effectiveness. The findings highlight that the proliferation of metrics enhances the understanding of xAI systems but simultaneously exposes challenges such as metric duplication, inefficacy, and confusion. These issues underscore the pressing need for standardized evaluation frameworks to streamline their application and strengthen their effectiveness, thereby improving the overall utility of xAI in critical domains.

• Bridging xAI theory and practice.
• Systematic review of xAI metrics and frameworks.
• Experimental evaluation of various xAI explanations.
• The results show many metrics are ineffective.
• The abundance of metrics has pros and cons.

[ABSTRACT FROM AUTHOR]
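To make the idea of "evaluating an xAI technique against a metric" concrete, below is a minimal, generic sketch of one commonly used style of evaluation: a deletion-based fidelity check for a feature-attribution explanation. The model, dataset, attribution method, and metric here are illustrative assumptions for this record only; they are not the specific techniques, metrics, or experiments reported in the paper.

```python
# Hedged sketch: deletion-style fidelity for a feature attribution.
# All components (toy data, permutation attribution, mean-imputation
# deletion) are generic placeholders, not the paper's actual setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy binary-classification data standing in for a tabular cybersecurity set.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def permutation_attribution(model, X, y, n_repeats=5, seed=0):
    """Crude global attribution: mean accuracy drop when a feature is shuffled."""
    rng = np.random.default_rng(seed)
    base = model.score(X, y)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # break the feature-label link
            drops.append(base - model.score(Xp, y))
        scores[j] = np.mean(drops)
    return scores

def deletion_fidelity(model, X, y, attribution, k=3):
    """Fidelity proxy: replace the k most-attributed features with their mean
    and measure the accuracy degradation. A larger drop suggests the
    explanation points at features the model actually relies on."""
    top = np.argsort(attribution)[::-1][:k]
    Xd = X.copy()
    Xd[:, top] = X[:, top].mean(axis=0)
    return model.score(X, y) - model.score(Xd, y)

attr = permutation_attribution(model, X, y)
print("accuracy drop after deleting top-3 features:", deletion_fidelity(model, X, y, attr))
```

A deletion-style check like this is only one of the many metric families the paper surveys; the point of the study is precisely that such metrics can overlap, disagree, or fail to add information, which motivates a standardized evaluation framework.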

Details

Language :
English
ISSN :
0925-2312
Volume :
602
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
179061294
Full Text :
https://doi.org/10.1016/j.neucom.2024.128282