
DORA: Exploring Outlier Representations in Deep Neural Networks

Authors :
Bykov, Kirill
Deb, Mayukh
Grinwald, Dennis
Müller, Klaus-Robert
Höhne, Marina M.-C.
Source :
Published in Transactions on Machine Learning Research (06/2023)
Publication Year :
2022

Abstract

Deep Neural Networks (DNNs) excel at learning complex abstractions within their internal representations. However, the concepts they learn remain opaque, a problem that becomes particularly acute when models unintentionally learn spurious correlations. In this work, we present DORA (Data-agnOstic Representation Analysis), the first data-agnostic framework for analyzing the representational space of DNNs. Central to our framework is the proposed Extreme-Activation (EA) distance measure, which assesses similarities between representations by analyzing their activation patterns on data points that cause the highest level of activation. As spurious correlations often manifest in features of data that are anomalous to the desired task, such as watermarks or artifacts, we demonstrate that internal representations capable of detecting such artifactual concepts can be found by analyzing relationships within neural representations. We validate the EA metric quantitatively, demonstrating its effectiveness both in controlled scenarios and real-world applications. Finally, we provide practical examples from popular Computer Vision models to illustrate that representations identified as outliers using the EA metric often correspond to undesired and spurious concepts.

Comment: 24 pages, 18 figures
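The core idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' exact EA metric: it assumes we already have a hypothetical cross-activation matrix `A`, where `A[i, j]` is the activation of neuron `j` on a stimulus that maximally activates neuron `i`. The sketch turns correlation between these activation patterns into a pairwise distance and flags the representation with the largest average distance to all others as an outlier candidate.

```python
import numpy as np

def ea_style_distance(A: np.ndarray) -> np.ndarray:
    """Pairwise distance d(i, k) = sqrt((1 - corr(A[i], A[k])) / 2).

    Illustrative only: a correlation-based distance over per-neuron
    activation patterns, in the spirit of the EA measure.
    """
    corr = np.corrcoef(A)            # correlation of activation patterns
    corr = np.clip(corr, -1.0, 1.0)  # guard against numerical drift
    return np.sqrt((1.0 - corr) / 2.0)

def outlier_scores(D: np.ndarray) -> np.ndarray:
    """Score each representation by its mean distance to all others."""
    n = D.shape[0]
    return D.sum(axis=1) / (n - 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: nine "normal" neurons share a common activation
    # pattern; the tenth (a hypothetical watermark detector)
    # responds to something unrelated.
    base = rng.normal(size=20)
    A = np.stack(
        [base + 0.1 * rng.normal(size=20) for _ in range(9)]
        + [rng.normal(size=20)]
    )
    D = ea_style_distance(A)
    print(int(np.argmax(outlier_scores(D))))  # index of the outlier neuron
```

On this toy input, the unrelated tenth neuron receives the highest outlier score, mirroring the paper's claim that representations anomalous to the rest of the representational space are candidates for spurious-concept detectors.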

Details

Database :
arXiv
Journal :
Transactions on Machine Learning Research (06/2023)
Publication Type :
Report
Accession number :
edsarx.2206.04530
Document Type :
Working Paper