
Achievements and Challenges in Explaining Deep Learning based Computer-Aided Diagnosis Systems

Authors:
Lucieri, Adriano
Bajwa, Muhammad Naseer
Dengel, Andreas
Ahmed, Sheraz
Publication Year:
2020

Abstract

The remarkable success of modern image-based AI methods, and the resulting interest in their application to critical decision-making processes, has led to a surge in efforts to make such intelligent systems transparent and explainable. The need for explainable AI stems not only from ethical and moral grounds but also from stricter legislation around the world mandating clear and justifiable explanations of any decision taken or assisted by AI. Especially in the medical context, where Computer-Aided Diagnosis can have a direct influence on the treatment and well-being of patients, transparency is of utmost importance for a safe transition from lab research to real-world clinical practice. This paper provides a comprehensive overview of the current state of the art in explaining and interpreting Deep Learning based algorithms in applications of medical research and diagnosis of diseases. We discuss early achievements in the development of explainable AI for validation of known disease criteria and exploration of new potential biomarkers, as well as methods for the subsequent correction of AI models. Various explanation methods, including visual, textual, post-hoc, ante-hoc, local, and global approaches, are thoroughly and critically analyzed. Subsequently, we highlight some of the remaining challenges that stand in the way of practical application of AI as a clinical decision support tool and provide recommendations for the direction of future research.

Comment: 17 pages, 2 figures
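As a concrete illustration of the "visual" family of explanation methods the abstract mentions, the sketch below computes a vanilla gradient saliency map. It is not taken from the paper itself; the PyTorch model choice, random weights, and dummy input are placeholder assumptions made only so the example is self-contained and runnable.

    import torch
    import torchvision.models as models

    # Placeholder classifier (not from the paper); any differentiable
    # image model would serve the same illustrative purpose.
    model = models.resnet18(weights=None)
    model.eval()

    # Dummy input standing in for a medical image; requires_grad=True
    # so gradients with respect to the pixels can be computed.
    image = torch.rand(1, 3, 224, 224, requires_grad=True)

    logits = model(image)
    score = logits[0, logits.argmax(dim=1).item()]

    # The gradient of the predicted class score with respect to the
    # input highlights the pixels the model relied on for its decision.
    score.backward()
    saliency = image.grad.abs().max(dim=1).values  # heat map, shape (1, 224, 224)

Surveyed visual methods such as class activation mapping refine this basic gradient signal; this minimal version only conveys the underlying idea of attributing a prediction back to input pixels.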

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2011.13169
Document Type:
Working Paper