
Manipulating Feature Visualizations with Gradient Slingshots

Authors:
Bareeva, Dilyara
Höhne, Marina M.-C.
Warnecke, Alexander
Pirch, Lukas
Müller, Klaus-Robert
Rieck, Konrad
Bykov, Kirill
Publication Year:
2024

Abstract

Deep Neural Networks (DNNs) are capable of learning complex and versatile representations; however, the semantic nature of the learned concepts remains unknown. A common method for explaining the concepts learned by DNNs is Feature Visualization (FV), which generates a synthetic input signal that maximally activates a particular neuron in the network. In this paper, we investigate the vulnerability of this approach to adversarial model manipulations and introduce a novel method for manipulating FV without significantly impacting the model's decision-making process. A key distinction of our proposed approach is that it does not alter the model architecture. We evaluate the effectiveness of our method on several neural network models and demonstrate its capability to hide the functionality of arbitrarily chosen neurons by masking the original explanations of neurons with chosen target explanations during model auditing.
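The Feature Visualization procedure the abstract describes is activation maximization: gradient ascent on the input to maximize a chosen neuron's activation. A minimal sketch is below, using a single linear "neuron" in NumPy so the example is self-contained; a real FV pipeline would instead backpropagate through a deep network and optimize an image. The function and parameter names (`visualize_neuron`, `n_steps`, `lr`) are illustrative, not from the paper.

```python
import numpy as np

def neuron_activation(x, w):
    """Toy neuron: activation is the dot product of input x with weights w."""
    return float(w @ x)

def visualize_neuron(w, n_steps=200, lr=0.1, seed=0):
    """Gradient ascent on the input to maximize the neuron's activation.

    The input is projected back to the unit sphere after each step; under
    the constraint ||x|| = 1, the maximizer of w @ x is w / ||w||, which
    the ascent converges to.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=w.shape)
    x /= np.linalg.norm(x)
    for _ in range(n_steps):
        grad = w                    # d(w @ x)/dx for a linear neuron
        x = x + lr * grad           # ascent step toward higher activation
        x /= np.linalg.norm(x)      # keep the input on the unit sphere
    return x

w = np.array([3.0, -1.0, 2.0])
x_star = visualize_neuron(w)
# x_star aligns with w / ||w||: the synthetic input that maximally
# activates this neuron, i.e. its feature visualization
```

The paper's attack perturbs the model's weights so that this optimization converges to a chosen target visualization instead of the neuron's true one, while leaving the model's predictions largely unchanged.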

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2401.06122
Document Type:
Working Paper