Benchmarking XAI Explanations with Human-Aligned Evaluations
- Authors
Kazmierczak, Rémi; Azzolin, Steve; Berthier, Eloïse; Hedström, Anna; Delhomme, Patricia; Bousquet, Nicolas; Frehse, Goran; Mancini, Massimiliano; Caramiaux, Baptiste; Passerini, Andrea; and Franchi, Gianni
- Subjects
Computer Science - Computer Vision and Pattern Recognition; Computer Science - Artificial Intelligence; Computer Science - Human-Computer Interaction
- Abstract
In this paper, we introduce PASTA (Perceptual Assessment System for explanaTion of Artificial intelligence), a novel framework for human-centric evaluation of XAI techniques in computer vision. Our first key contribution is a human evaluation of XAI explanations on four diverse datasets (COCO, Pascal Parts, Cats Dogs Cars, and MonumAI), which constitutes the first large-scale benchmark dataset for XAI, with annotations at both the image and concept levels. This dataset allows for robust evaluation and comparison across various XAI methods. Our second major contribution is a data-based metric for assessing the interpretability of explanations: trained on the database of human evaluations in the PASTA dataset, it mimics human preferences. With its dataset and metric, the PASTA framework provides consistent and reliable comparisons between XAI techniques, in a way that is scalable but still aligned with human evaluations. Additionally, our benchmark allows for comparisons between explanations across different modalities, an aspect previously unaddressed. Our findings indicate that humans tend to prefer saliency maps over other explanation types. Moreover, we provide evidence that human assessments correlate weakly with existing XAI metrics that are computed numerically by probing the model.
- Comment
https://github.com/ENSTA-U2IS-AI/Dataset_XAI
- Published
2024
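
The data-based metric described in the abstract is, in spirit, a model fitted to a database of human ratings so that its scores track human preferences. The sketch below is a rough, hypothetical illustration only: the feature representation, the RandomForestRegressor choice, the 1-5 rating scale, and all data are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a PASTA-style "data-based metric": fit a regressor
# on human ratings of explanations so it mimics human preferences, then
# check its rank agreement with held-out human judgments. Synthetic data.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: 500 explanations, each summarized by a 16-dim feature
# vector (e.g., saliency-map statistics), with a human rating in [1, 5].
X = rng.normal(size=(500, 16))
human_rating = 3 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.3, size=500)
human_rating = np.clip(human_rating, 1, 5)

X_train, X_test, y_train, y_test = train_test_split(
    X, human_rating, test_size=0.2, random_state=0
)

# The learned scorer: a model trained to predict human preference.
scorer = RandomForestRegressor(n_estimators=100, random_state=0)
scorer.fit(X_train, y_train)

# Rank correlation matters here, since the goal is to order explanations
# the way humans would.
rho, p = spearmanr(scorer.predict(X_test), y_test)
print(f"Spearman rho vs. human ratings: {rho:.3f} (p = {p:.3g})")
```

The same spearmanr check, applied to scores from existing automated XAI metrics instead of the learned scorer, is one way to quantify the weak human-metric correlation the abstract reports.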