1. Unifying Interpretability and Explainability for Alzheimer's Disease Progression Prediction
- Author
Ali, Raja Farrukh, Milani, Stephanie, Woods, John, Adeniji, Emmanuel, Farooq, Ayesha, Mansel, Clayton, Burns, Jeffrey, and Hsu, William
- Subjects
Computer Science - Machine Learning
- Abstract
Reinforcement learning (RL) has recently shown promise in predicting Alzheimer's disease (AD) progression due to its unique ability to model domain knowledge. However, it is not clear which RL algorithms are well-suited for this task. Furthermore, these methods are not inherently explainable, limiting their applicability in real-world clinical scenarios. Our work addresses these two important questions. Using a causal, interpretable model of AD, we first compare the performance of four contemporary RL algorithms in predicting brain cognition over 10 years using only baseline (year 0) data. We then apply SHAP (SHapley Additive exPlanations) to explain the decisions made by each algorithm in the model. Our approach combines interpretability with explainability to provide insights into the key factors influencing AD progression, offering both global and individual, patient-level analysis. Our findings show that only one of the RL methods is able to satisfactorily model disease progression, but the post-hoc explanations indicate that all methods fail to properly capture the importance of amyloid accumulation, one of the pathological hallmarks of Alzheimer's disease. Our work aims to merge predictive accuracy with transparency, assisting clinicians and researchers in enhancing disease progression modeling for informed healthcare decisions. Code is available at https://github.com/rfali/xrlad.
- Comment
Previous versions accepted to NeurIPS 2023's XAIA and AAAI 2024's XAI4DRL workshops
- Published
2024
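
The abstract describes applying SHAP to a trained RL policy to obtain both global and individual, patient-level feature attributions. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation (which is available at https://github.com/rfali/xrlad): the four baseline feature names, the stand-in PyTorch policy network, and the choice of shap's model-agnostic KernelExplainer are all assumptions made here for illustration only.

```python
# Hypothetical sketch: global and per-patient SHAP attributions for an RL policy.
# Not the authors' code; feature names and the policy network are placeholders.
import numpy as np
import shap
import torch
import torch.nn as nn

# Invented baseline (year 0) features; the real study uses its own state variables.
feature_names = ["cognition", "amyloid", "age", "education"]
X = np.random.rand(200, len(feature_names)).astype(np.float32)

# Stand-in for a trained policy network: baseline state -> scalar action value.
policy = nn.Sequential(nn.Linear(len(feature_names), 16), nn.ReLU(), nn.Linear(16, 1))

def predict(x: np.ndarray) -> np.ndarray:
    """Wrap the policy's forward pass as a plain numpy function for SHAP."""
    with torch.no_grad():
        return policy(torch.from_numpy(x.astype(np.float32))).numpy().ravel()

# KernelExplainer is model-agnostic, so it works for any policy's forward pass;
# a gradient-based explainer could be substituted for a differentiable policy.
background = shap.kmeans(X, 10)
explainer = shap.KernelExplainer(predict, background)
shap_values = explainer.shap_values(X[:50])

# Global view: mean absolute SHAP value per feature across patients.
global_importance = np.abs(shap_values).mean(axis=0)
for name, value in zip(feature_names, global_importance):
    print(f"{name}: {value:.4f}")

# Individual, patient-level view: per-feature contributions for one patient.
print("Patient 0 contributions:", dict(zip(feature_names, shap_values[0])))
```

A global summary like the one above is what would reveal whether a feature such as amyloid accumulation receives the weight clinical knowledge says it should, while the per-patient breakdown supports case-level inspection.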