Evaluating the Explainable AI Method Grad-CAM for Breath Classification on Newborn Time Series Data

Authors :
Oprea, Camelia
Grüne, Mike
Buglowski, Mateusz
Olivier, Lena
Orlikowsky, Thorsten
Kowalewski, Stefan
Schoberer, Mark
Stollenwerk, André
Publication Year :
2024

Abstract

With the digitalization of health care systems, artificial intelligence is becoming more present in medicine. Machine learning in particular shows great potential for complex tasks such as time series classification, usually at the cost of transparency and comprehensibility. This leads to a lack of trust by humans and thus hinders its active usage. Explainable artificial intelligence tries to close this gap by providing insight into the decision-making process; the actual usefulness of its different methods, however, remains unclear. This paper proposes a user-study-based evaluation of the explanation method Grad-CAM applied to a neural network for the classification of breaths in time series neonatal ventilation data. We present the perceived usefulness of the explainability method for different stakeholders, exposing the difficulty of achieving actual transparency and the wish of many participants for more in-depth explanations.

Comment: © 2024 The authors. This work has been accepted to IFAC for publication under a Creative Commons Licence CC-BY-NC-ND. Accepted for the 12th IFAC Symposium on Biological and Medical Systems. 6 pages, 7 figures.
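As a rough illustration of the explanation method evaluated in the paper, the sketch below computes Grad-CAM relevance curves for a 1D convolutional time-series classifier in PyTorch. The network architecture, layer choices, and names (TinyBreathNet, grad_cam_1d) are illustrative assumptions for this sketch and do not reflect the model or code used by the authors.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Illustrative 1D CNN; NOT the network from the paper.
    class TinyBreathNet(nn.Module):
        def __init__(self, n_channels: int = 1, n_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
                nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=7, padding=3),
                nn.ReLU(),
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):
            a = self.features(x)        # (B, 32, T): activations of the last conv layer
            pooled = a.mean(dim=-1)     # global average pooling over time
            return self.classifier(pooled), a

    def grad_cam_1d(model, x, target_class):
        """Return a per-timestep Grad-CAM relevance curve for `target_class`."""
        model.eval()
        logits, activations = model(x)
        activations.retain_grad()       # keep gradients of the conv feature maps
        score = logits[:, target_class].sum()
        model.zero_grad()
        score.backward()

        # Channel weights: gradients averaged over the time axis (Grad-CAM alphas)
        alphas = activations.grad.mean(dim=-1, keepdim=True)     # (B, 32, 1)
        # Weighted sum of feature maps, ReLU keeps only positive evidence
        cam = F.relu((alphas * activations).sum(dim=1))          # (B, T)
        cam = cam / (cam.max(dim=-1, keepdim=True).values + 1e-8)
        return cam.detach()

    # Usage: random data stands in for a single ventilation waveform segment.
    signal = torch.randn(1, 1, 256)
    model = TinyBreathNet()
    relevance = grad_cam_1d(model, signal, target_class=1)
    print(relevance.shape)  # torch.Size([1, 256]): one relevance value per timestep

The resulting curve assigns each timestep of the input signal a relevance score, which can be overlaid on the ventilation waveform as a heatmap; the user study in the paper evaluates how useful such visualizations are to different stakeholders.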

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2405.07590
Document Type :
Working Paper