
Interpretability Needs a New Paradigm

Authors:
Madsen, Andreas
Lakkaraju, Himabindu
Reddy, Siva
Chandar, Sarath
Publication Year:
2024

Abstract

Interpretability is the study of explaining models in terms understandable to humans. At present, interpretability is divided into two paradigms: the intrinsic paradigm, which holds that only models designed to be explained can be explained, and the post-hoc paradigm, which holds that even black-box models can be explained. At the core of this debate is how each paradigm ensures its explanations are faithful, i.e., true to the model's behavior. This matters because false but convincing explanations lead to unsupported confidence in artificial intelligence (AI), which can be dangerous. This paper's position is that we should think about new paradigms while staying vigilant regarding faithfulness. First, by examining the history of paradigms in science, we see that paradigms are constantly evolving. Then, by examining the current paradigms, we can understand their underlying beliefs, the value they bring, and their limitations. Finally, this paper presents three emerging paradigms for interpretability: the first designs models such that faithfulness can be easily measured, the second optimizes models such that their explanations become faithful, and the last proposes to develop models that produce both a prediction and an explanation.

Details

Database:
OAIster
Publication Type:
Electronic Resource
Accession Number:
edsoai.on1438555120
Document Type:
Electronic Resource