
Ensured: Explanations for Decreasing the Epistemic Uncertainty in Predictions

Authors:
Löfström, Helena
Löfström, Tuwe
Szabadvary, Johan Hallberg
Publication Year:
2024

Abstract

This paper addresses a significant gap in explainable AI: the need to interpret epistemic uncertainty in model explanations. Although current methods mainly focus on explaining predictions, and some include uncertainty, they fail to provide guidance on how to reduce the inherent uncertainty in those predictions. To overcome this challenge, we introduce new types of explanations that specifically target epistemic uncertainty. These include ensured explanations, which highlight feature modifications that can reduce uncertainty, and a categorisation of uncertain explanations into counter-potential, semi-potential, and super-potential explanations, which explore alternative scenarios. Our work emphasises that epistemic uncertainty adds a crucial dimension to explanation quality, demanding evaluation based not only on prediction probability but also on uncertainty reduction. We introduce a new metric, ensured ranking, designed to help users identify the most reliable explanations by balancing trade-offs between uncertainty, probability, and competing alternative explanations. Furthermore, we extend the Calibrated Explanations method, incorporating tools that visualise how changes in feature values impact epistemic uncertainty. This enhancement provides deeper insights into model behaviour, promoting increased interpretability and appropriate trust in scenarios involving uncertain predictions.

Comment: 35 pages, 11 figures, journal
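To make the trade-off idea concrete, the following minimal Python sketch ranks candidate explanations by a toy score that rewards high predicted probability and penalises wide epistemic uncertainty intervals. The names, data structure, and scoring rule are illustrative assumptions only; they are not the paper's actual ensured ranking metric or the Calibrated Explanations API.

```python
# Illustrative sketch only: a hypothetical trade-off score for ranking
# candidate explanations by probability and epistemic uncertainty.
# This is NOT the paper's ensured ranking metric.
from dataclasses import dataclass


@dataclass
class Candidate:
    label: str          # description of the feature modification considered
    probability: float  # calibrated probability of the predicted class
    low: float          # lower bound of the epistemic uncertainty interval
    high: float         # upper bound of the epistemic uncertainty interval


def toy_tradeoff_score(c: Candidate, weight: float = 0.5) -> float:
    """Higher is better: rewards probability, penalises interval width."""
    interval_width = c.high - c.low
    return (1 - weight) * c.probability - weight * interval_width


candidates = [
    Candidate("increase feature x3", probability=0.82, low=0.55, high=0.95),
    Candidate("decrease feature x1", probability=0.78, low=0.70, high=0.84),
]

# A slightly less probable but far more certain alternative can outrank a
# high-probability but highly uncertain one under this kind of scoring.
for c in sorted(candidates, key=toy_tradeoff_score, reverse=True):
    print(f"{c.label}: score={toy_tradeoff_score(c):.3f}")
```

Under these assumptions, the second candidate ranks first despite its lower probability, because its much narrower uncertainty interval dominates the score.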

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2410.05479
Document Type:
Working Paper