Explainability in AI Based Applications: A Framework for Comparing Different Techniques
- Authors
Grobrügge, Arne, Mishra, Nidhi, Jakubik, Johannes, and Satzger, Gerhard
- Subjects
Computer Science - Artificial Intelligence; 14J60 (Primary), 14F05, 14J26 (Secondary); F.2.2; I.2.7
- Abstract
The integration of artificial intelligence into business processes has significantly enhanced decision-making capabilities across industries such as finance, healthcare, and retail. However, explaining the decisions made by these AI systems poses a significant challenge due to the opaque nature of recent deep learning models, which typically function as black boxes. To address this opacity, a multitude of explainability techniques have emerged, yet in practical business applications the challenge lies in selecting an appropriate explainability method that balances comprehensibility with accuracy. This paper addresses the practical need to understand differences in the output of explainability techniques by proposing a novel method for assessing the agreement of different explainability techniques. Based on the proposed method, we provide a comprehensive comparative analysis of six leading explainability techniques to help guide the selection of such techniques in practice. Our general-purpose method is evaluated on one of the most popular deep learning architectures, the Vision Transformer model, which is frequently employed in business applications. Notably, we propose a novel, visually interpretable metric for measuring the agreement of explainability techniques. By providing a practical framework for understanding the agreement of diverse explainability techniques, our research aims to facilitate the broader integration of interpretable AI systems in business applications.
- Comment
10 pages, 5 figures
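The listing does not detail the paper's agreement metric, but the underlying idea of quantifying how strongly the attribution maps produced by two explainability techniques coincide can be sketched with standard measures. The snippet below is a minimal illustration under that assumption; `topk_agreement` and `rank_agreement` are hypothetical helpers, not the metric proposed in the paper.

```python
import numpy as np
from scipy.stats import spearmanr

def topk_agreement(map_a: np.ndarray, map_b: np.ndarray, k: float = 0.1) -> float:
    """Jaccard overlap of the top-k fraction of most-attributed pixels in two maps."""
    a, b = map_a.ravel(), map_b.ravel()
    n = max(1, int(k * a.size))
    top_a = set(np.argsort(a)[-n:])  # indices of the n most-attributed pixels
    top_b = set(np.argsort(b)[-n:])
    return len(top_a & top_b) / len(top_a | top_b)

def rank_agreement(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Spearman rank correlation between two flattened attribution maps."""
    rho, _ = spearmanr(map_a.ravel(), map_b.ravel())
    return rho

# Example with two placeholder 14x14 heatmaps (the token grid of a ViT-B/16
# on 224x224 input); real maps would come from the explainability techniques.
rng = np.random.default_rng(0)
heatmap_1 = rng.random((14, 14))
heatmap_2 = rng.random((14, 14))
print(topk_agreement(heatmap_1, heatmap_2), rank_agreement(heatmap_1, heatmap_2))
```

Top-k overlap emphasizes whether two techniques highlight the same salient regions, while rank correlation captures agreement across the full map; either quantity can be averaged over a dataset and visualized as a pairwise matrix of techniques.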
- Published
2024