
Building and Interpreting Deep Similarity Models

Authors :
Eberle, Oliver
Büttner, Jochen
Kräutli, Florian
Müller, Klaus-Robert
Valleriani, Matteo
Montavon, Grégoire
Publication Year :
2020

Abstract

Many learning algorithms, such as kernel machines, nearest neighbors, clustering, or anomaly detection, are based on the concept of 'distance' or 'similarity'. Before similarities are used for training an actual machine learning model, we would like to verify that they are bound to meaningful patterns in the data. In this paper, we propose to make similarities interpretable by augmenting them with an explanation in terms of input features. We develop BiLRP, a scalable and theoretically founded method to systematically decompose similarity scores on pairs of input features. Our method can be expressed as a composition of LRP explanations, which were shown in previous works to scale to highly nonlinear functions. Through an extensive set of experiments, we demonstrate that BiLRP robustly explains complex similarity models, e.g. built on VGG-16 deep neural network features. Additionally, we apply our method to an open problem in digital humanities: detailed assessment of similarity between historical documents such as astronomical tables. Here again, BiLRP provides insight and brings verifiability into a highly engineered and problem-specific similarity model.

Comment: 12 pages, 10 figures
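The "composition of LRP explanations" mentioned in the abstract can be illustrated on a toy case. The sketch below is a hypothetical, minimal example (not the paper's implementation): for a linear feature map phi(x) = W x, LRP reduces to weight-times-input, and composing the two per-feature explanations yields a pairwise relevance map R[i, i'] whose entries sum exactly to the similarity score <phi(x), phi(x')>.

```python
import numpy as np

def bilrp_linear(W, x, xp):
    """Toy BiLRP-style decomposition for a linear feature map phi(x) = W x.

    For linear models, the LRP explanation of feature phi_m(x) is simply
    W[m, i] * x[i]. Composing the explanations of phi(x) and phi(xp) gives
    a pairwise map R[i, i'] = sum_m (W[m, i] * x[i]) * (W[m, i'] * xp[i']).
    """
    Rx = W * x      # shape (M, D): relevance of input i for each feature m
    Rxp = W * xp    # shape (M, D): same for the second input
    return Rx.T @ Rxp  # shape (D, D): pairwise relevance map R[i, i']

# Small random example (hypothetical data, fixed seed for reproducibility)
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 5))   # M=8 learned features over D=5 input dims
x = rng.normal(size=5)
xp = rng.normal(size=5)

R = bilrp_linear(W, x, xp)
similarity = (W @ x) @ (W @ xp)  # the similarity score being explained
```

The key property to observe is conservation: `R.sum()` equals the similarity score, so the pairwise map is a complete decomposition of the model output. The paper extends this idea to deep, nonlinear feature maps by propagating the decomposition layer by layer with LRP.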

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1228395345
Document Type :
Electronic Resource
Full Text :
https://doi.org/10.1109/TPAMI.2020.3020738