Towards Evaluating Explanations of Vision Transformers for Medical Imaging
- Source: IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 3726-3732, 2023
- Publication Year: 2023
Abstract
- As deep learning models increasingly find applications in critical domains such as medical imaging, the need for transparent and trustworthy decision-making becomes paramount. Many explainability methods provide insight into how these models make predictions by attributing importance to input features. As the Vision Transformer (ViT) becomes a promising alternative to convolutional neural networks for image classification, its interpretability remains an open research question. This paper investigates the performance of various interpretation methods on a ViT applied to classifying chest X-ray images. We introduce the notion of evaluating the faithfulness, sensitivity, and complexity of ViT explanations. The results indicate that Layer-wise Relevance Propagation for transformers outperforms Local Interpretable Model-agnostic Explanations (LIME) and attention visualization, providing a more accurate and reliable representation of what the ViT has actually learned. Our findings provide insight into the applicability of ViT explanations in medical imaging and highlight the importance of using appropriate evaluation criteria when comparing them.
- Comment: Accepted by the XAI4CV Workshop at CVPR 2023
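The three criteria named in the abstract are quantitative scores computed over attribution maps. As a rough illustration of how one of them can be scored, below is a minimal sketch of a common perturbation-based formulation of faithfulness correlation: the attribution mass of randomly masked pixel subsets is correlated with the resulting drop in the model's class score. The `model_fn` interface, the zero-baseline masking scheme, and the subset size are illustrative assumptions, not necessarily the paper's exact protocol.

```python
# Minimal sketch of a faithfulness-correlation metric for an attribution map.
# Assumption: model_fn maps an image array to a scalar score for the target class.
import numpy as np

def faithfulness_correlation(model_fn, image, attribution, n_trials=50,
                             subset_size=64, baseline=0.0, seed=0):
    """Correlate the attribution mass of randomly masked pixel subsets with
    the resulting drop in the model's class score; a higher correlation
    suggests a more faithful explanation."""
    rng = np.random.default_rng(seed)
    flat_attr = attribution.ravel()
    base_score = model_fn(image)              # score on the unperturbed input
    attr_sums, score_drops = [], []
    for _ in range(n_trials):
        idx = rng.choice(flat_attr.size, size=subset_size, replace=False)
        perturbed = image.copy().ravel()
        perturbed[idx] = baseline             # mask the chosen pixels
        drop = base_score - model_fn(perturbed.reshape(image.shape))
        score_drops.append(drop)
        attr_sums.append(flat_attr[idx].sum())
    # Pearson correlation between attributed importance and observed effect
    return np.corrcoef(attr_sums, score_drops)[0, 1]
```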
Details
- Database: arXiv
- Publication Type: Report
- Accession number: edsarx.2304.06133
- Document Type: Working Paper
- Full Text: https://doi.org/10.1109/CVPRW59228.2023.00383