
Interpretable Local Concept-based Explanation with Human Feedback to Predict All-cause Mortality.

Authors :
ElShawi, Radwa
Al-Mallah, Mouaz H.
Source :
Journal of Artificial Intelligence Research; 2022, Vol. 75, p833-855, 23p
Publication Year :
2022

Abstract

Machine learning models are incorporated into many fields and disciplines, some of which require a high level of accountability and transparency, for example, the healthcare sector. With the General Data Protection Regulation (GDPR), the plausibility and verifiability of predictions made by machine learning models have become essential. A widely used category of explanation techniques attempts to explain a model's predictions by quantifying an importance score for each input feature. However, summarizing such scores into human-interpretable explanations is challenging. Another category of explanation techniques focuses on learning a domain representation in terms of high-level, human-understandable concepts and then utilizing them to explain predictions. These explanations are hampered by how the concepts are constructed, which is not intrinsically interpretable. To this end, we propose Concept-based Local Explanations with Feedback (CLEF), a novel local model-agnostic explanation framework for learning a set of high-level, transparent concept definitions in high-dimensional tabular data that uses clinician-labeled concepts rather than raw features. CLEF maps the raw input features to high-level intuitive concepts and then decomposes the evidence for the prediction of the instance being explained into concepts. In addition, the proposed framework generates counterfactual explanations, suggesting the minimum changes in the instance's concept-based explanation that would lead to a different prediction. We demonstrate the framework with simulated user feedback on predicting the risk of all-cause mortality. Such direct feedback is more effective than techniques that rely on hand-labelled or automatically extracted concepts in learning concepts that align with ground-truth concept definitions. [ABSTRACT FROM AUTHOR]
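To make the two steps described in the abstract concrete — mapping raw tabular features to high-level concepts, then searching for a minimal concept-level change that flips the prediction — here is a minimal sketch. The concept names, thresholds, and toy scoring rule are illustrative assumptions, not the authors' actual CLEF definitions or algorithm (in CLEF the concept definitions are learned from clinician feedback).

```python
# Illustrative sketch only: concept names, thresholds, and the scoring
# rule are invented for demonstration, not CLEF's actual definitions.
from itertools import combinations

# Step 1: map raw tabular features to high-level binary concepts.
CONCEPTS = {
    "poor_exercise_capacity": lambda x: x["mets"] < 6,
    "hypertensive":           lambda x: x["systolic_bp"] >= 140,
    "elderly":                lambda x: x["age"] >= 65,
}

def to_concepts(raw):
    """Translate a raw feature dict into concept truth values."""
    return {name: rule(raw) for name, rule in CONCEPTS.items()}

def predict(concepts):
    """Toy concept-level classifier: 'high risk' if >= 2 concepts hold."""
    return sum(concepts.values()) >= 2

# Step 2: counterfactual search — the smallest set of concept flips
# that changes the model's prediction.
def counterfactual(concepts):
    base = predict(concepts)
    names = list(concepts)
    for k in range(1, len(names) + 1):
        for flips in combinations(names, k):
            cand = dict(concepts)
            for n in flips:
                cand[n] = not cand[n]
            if predict(cand) != base:
                return flips
    return None

patient = {"mets": 4.5, "systolic_bp": 150, "age": 58}
c = to_concepts(patient)
print(predict(c))         # True: two concepts hold, so 'high risk'
print(counterfactual(c))  # ('poor_exercise_capacity',): one flip suffices
```

The counterfactual is expressed in concept space, so the suggested change ("improve exercise capacity") is directly readable by a clinician, which is the interpretability advantage the abstract claims over raw feature-importance scores.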

Details

Language :
English
ISSN :
1076-9757
Volume :
75
Database :
Complementary Index
Journal :
Journal of Artificial Intelligence Research
Publication Type :
Academic Journal
Accession number :
161927377
Full Text :
https://doi.org/10.1613/jair.1.14019