
Evaluating the Interpretability of Generative Models by Interactive Reconstruction

Authors:
Ross, Andrew Slavin
Chen, Nina
Hang, Elisa Zhao
Glassman, Elena L.
Doshi-Velez, Finale

Publication Year:
2021

Abstract

For machine learning models to be most useful in numerous sociotechnical systems, many have argued that they must be human-interpretable. However, despite increasing interest in interpretability, there remains no firm consensus on how to measure it. This is especially true in representation learning, where interpretability research has focused on "disentanglement" measures only applicable to synthetic datasets and not grounded in human factors. We introduce a task to quantify the human-interpretability of generative model representations, where users interactively modify representations to reconstruct target instances. On synthetic datasets, we find performance on this task much more reliably differentiates entangled and disentangled models than baseline approaches. On a real dataset, we find it differentiates between representation learning methods widely believed but never shown to produce more or less interpretable models. In both cases, we ran small-scale think-aloud studies and large-scale experiments on Amazon Mechanical Turk to confirm that our qualitative and quantitative results agreed.

Comment: CHI 2021 accepted paper

Details

Database:
arXiv

Publication Type:
Report

Accession number:
edsarx.2102.01264

Document Type:
Working Paper