
Contrastive Explanations with Local Foil Trees

Authors :
van der Waa, Jasper
Robeer, Marcel
van Diggelen, Jurriaan
Brinkhuis, Matthieu
Neerincx, Mark
Publication Year :
2018

Abstract

Recent advances in interpretable Machine Learning (iML) and eXplainable AI (XAI) construct explanations based on the importance of features in classification tasks. However, in a high-dimensional feature space this approach may become infeasible without restricting the set of important features. We propose to exploit the human tendency to ask questions like "Why this output (the fact) instead of that output (the foil)?" to reduce the number of features to those that play a main role in the contrast being asked about. Our proposed method uses locally trained one-versus-all decision trees to identify the disjoint set of rules that causes the tree to classify data points as the foil and not as the fact. In this study we illustrate this approach on three benchmark classification tasks.

Comment :
Presented at the 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Stockholm, Sweden
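The abstract outlines the core mechanism: fit a shallow decision tree on a local sample, with one-versus-all labels marking points the black-box model assigns to the foil, then read off the rules that separate the fact's leaf from the nearest foil leaf. The sketch below illustrates that idea, assuming scikit-learn; the function name foil_rules, the caller-supplied local sample X_local, and the rule-set comparison are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def foil_rules(model, x, X_local, foil, max_depth=3):
    """Rules separating x's (fact) leaf from the nearest foil leaf in a
    locally trained one-versus-all surrogate tree. Hypothetical sketch,
    not the paper's reference code; x and X_local are NumPy arrays."""
    # One-versus-all labels: 1 where the black-box model predicts the foil.
    y = (np.asarray(model.predict(X_local)) == foil).astype(int)
    if y.sum() == 0:
        return set()  # no foil points in the local sample; nothing to contrast

    # Shallow surrogate tree fit on the local sample only.
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X_local, y)
    t = tree.tree_

    # Enumerate every leaf together with its root-to-leaf rule set.
    def leaf_paths(node=0, rules=()):
        if t.children_left[node] == -1:  # leaf
            yield node, set(rules)
        else:
            f, thr = t.feature[node], t.threshold[node]
            yield from leaf_paths(t.children_left[node], rules + ((f, "<=", thr),))
            yield from leaf_paths(t.children_right[node], rules + ((f, ">", thr),))

    paths = dict(leaf_paths())
    fact_leaf = tree.apply(x.reshape(1, -1))[0]  # leaf that contains x

    # Leaves the surrogate labels as the foil (class index of label 1).
    foil_idx = list(tree.classes_).index(1)
    foil_leaves = [n for n in paths if np.argmax(t.value[n][0]) == foil_idx]

    # Nearest foil leaf = smallest symmetric difference between rule sets;
    # the explanation is the set of rules it has that the fact leaf lacks.
    nearest = min(foil_leaves, key=lambda n: len(paths[n] ^ paths[fact_leaf]))
    return paths[nearest] - paths[fact_leaf]
```

The returned rules are the conditions that would have to change for the surrogate to predict the foil instead of the fact. Measuring leaf distance by the size of the symmetric rule difference is one plausible choice for "nearest"; how the local sample X_local is generated around the data point is left to the caller in this sketch.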

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1806.07470
Document Type :
Working Paper