
Explainable Adversarial Attacks on Coarse-to-Fine Classifiers

Authors :
Heidarizadeh, Akram
Hatfield, Connor
Lazzarotto, Lorenzo
Cai, HanQin
Atia, George
Publication Year :
2025

Abstract

Traditional adversarial attacks typically aim to alter the predicted labels of input images by generating perturbations that are imperceptible to the human eye. However, these approaches often lack explainability. Moreover, most existing work on adversarial attacks focuses on single-stage classifiers, leaving multi-stage classifiers largely unexplored. In this paper, we introduce instance-based adversarial attacks for multi-stage classifiers, leveraging Layer-wise Relevance Propagation (LRP), which assigns relevance scores to pixels based on their influence on classification outcomes. Our approach generates explainable adversarial perturbations by using LRP to identify and target the key features critical to both coarse and fine-grained classification. Unlike conventional attacks, our method not only induces misclassification but also enhances the interpretability of the model's behavior across classification stages, as demonstrated by experimental results.

Comment: ICASSP 2025
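The abstract does not include code, so the following is only a minimal sketch of the general idea under stated assumptions: per-pixel relevance is approximated here by gradient-times-input (a stand-in for full Layer-wise Relevance Propagation, which the paper actually uses), the selected high-relevance pixels are intersected across the coarse and fine classifiers, and an iterative sign-gradient attack perturbs only those pixels. The names coarse_model, fine_model, top_frac, eps, and alpha are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def relevance_mask(model, x, label, top_frac=0.1):
    """Approximate per-pixel relevance with gradient x input (a stand-in
    for full LRP) and keep only the top fraction of pixels as a binary mask."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    grad, = torch.autograd.grad(loss, x)
    rel = (grad * x).abs().sum(dim=1, keepdim=True)   # aggregate over color channels
    k = max(1, int(top_frac * rel.numel()))
    thresh = rel.flatten().topk(k).values.min()
    return (rel >= thresh).float()

def coarse_to_fine_attack(coarse_model, fine_model, x, coarse_label, fine_label,
                          eps=8 / 255, alpha=1 / 255, steps=20, top_frac=0.1):
    """Iterative sign-gradient attack restricted to pixels that are highly
    relevant to both the coarse and the fine classification stage."""
    mask = relevance_mask(coarse_model, x, coarse_label, top_frac) * \
           relevance_mask(fine_model, x, fine_label, top_frac)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # joint loss over both stages so the perturbation affects each prediction
        loss = F.cross_entropy(coarse_model(x_adv), coarse_label) + \
               F.cross_entropy(fine_model(x_adv), fine_label)
        grad, = torch.autograd.grad(loss, x_adv)
        # ascend the loss only on the relevance-selected pixels
        x_adv = x_adv.detach() + alpha * grad.sign() * mask
        # project back into the eps-ball around the original image
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv

Restricting the perturbation to the relevance mask is what makes the attack "explainable" in the sense described above: the perturbed pixels coincide with the features the classifiers themselves deem most influential at each stage.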

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2501.10906
Document Type :
Working Paper