
Improving Decision Sparsity

Authors :
Sun, Yiyang
Wang, Tong
Rudin, Cynthia
Publication Year :
2024

Abstract

Sparsity is a central aspect of interpretability in machine learning. Typically, sparsity is measured in terms of the size of a model globally, such as the number of variables it uses. However, this notion of sparsity is not particularly relevant for decision-making; someone subjected to a decision does not care about variables that do not contribute to the decision. In this work, we dramatically expand a notion of decision sparsity called the Sparse Explanation Value (SEV) so that its explanations are more meaningful. SEV considers movement along a hypercube towards a reference point. By allowing flexibility in that reference and by considering how distances along the hypercube translate to distances in feature space, we can derive sparser and more meaningful explanations for various types of function classes. We present cluster-based SEV and its variant tree-based SEV, introduce a method that improves credibility of explanations, and propose algorithms that optimize decision sparsity in machine learning models.

Comment: Accepted to 38th Conference on Neural Information Processing Systems (NeurIPS 2024)
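To make the abstract's notion of decision sparsity concrete, the following is a minimal sketch of an SEV-style score for a single decision: the smallest number of features that must be moved from the query point to a reference point in order to flip the prediction. It assumes a scikit-learn binary classifier and a single reference (here, the mean of the negatively-predicted population); the function name and brute-force subset search are illustrative only and are not the paper's algorithm.

```python
from itertools import combinations

import numpy as np
from sklearn.linear_model import LogisticRegression


def sev_minus(model, x, reference):
    """Smallest number of features that must be set to the reference values
    to flip a positive prediction to negative (a simple SEV-style score)."""
    n_features = len(x)
    for k in range(1, n_features + 1):
        for subset in combinations(range(n_features), k):
            # Move the chosen coordinates of x to the reference point,
            # i.e., take a step along the hypercube between x and reference.
            candidate = x.copy()
            candidate[list(subset)] = reference[list(subset)]
            if model.predict(candidate.reshape(1, -1))[0] == 0:
                return k  # k features suffice to change the decision
    return n_features  # fallback: every feature must be moved


# Toy usage: train a classifier and score one positively-predicted point.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

reference = X[clf.predict(X) == 0].mean(axis=0)  # negative-population reference
query = X[clf.predict(X) == 1][0]
print("SEV- of this decision:", sev_minus(clf, query, reference))
```

A small SEV value means the decision can be explained by only a few features, regardless of how many variables the model uses globally, which is the distinction the abstract draws between model sparsity and decision sparsity.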

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2410.20483
Document Type :
Working Paper