
Boosting adversarial robustness via feature refinement, suppression, and alignment.

Authors :
Wu, Yulun
Guo, Yanming
Chen, Dongmei
Yu, Tianyuan
Xiao, Huaxin
Guo, Yuanhao
Bai, Liang
Source :
Complex & Intelligent Systems; Jun 2024, Vol. 10 Issue 3, p3213-3233, 21p
Publication Year :
2024

Abstract

Deep neural networks are vulnerable to adversarial attacks, posing high risks to numerous security-critical applications. Existing adversarial defense algorithms primarily concentrate on optimizing adversarial training strategies to improve the robustness of neural networks, but overlook that misguided decisions ultimately stem from the activation values themselves. Moreover, such conventional strategies typically cause a substantial decline in clean accuracy. To address these issues, we propose a novel RSA algorithm to counteract adversarial perturbations while maintaining clean accuracy. Specifically, RSA comprises three distinct modules: feature refinement, activation suppression, and alignment modules. First, the feature refinement module refines malicious activation values in the feature space. Subsequently, the feature activation suppression module mitigates redundant activation values induced by adversarial perturbations across both channel and spatial dimensions. Finally, to avoid an excessive performance drop on clean samples, RSA incorporates a consistency constraint and a knowledge distillation constraint for feature alignment. Extensive experiments on five public datasets and three backbone networks demonstrate that our proposed algorithm achieves consistently superior performance in both adversarial robustness and clean accuracy over the state-of-the-art. [ABSTRACT FROM AUTHOR]
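The abstract describes suppressing redundant activations along both channel and spatial dimensions. The paper's actual module design is not given here, so the following is only a minimal hypothetical sketch of that general idea: rank a feature map's channels and spatial positions by activation energy and zero out the weakest fraction. The function name, the keep ratios, and the energy criterion are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def suppress_activations(feat, channel_keep=0.8, spatial_keep=0.8):
    """Hypothetical activation-suppression sketch (not the paper's RSA module):
    zero out the lowest-energy channels and spatial positions of a feature map.

    feat: array of shape (C, H, W); *_keep: fraction of entries retained.
    """
    # Channel dimension: rank channels by mean absolute activation
    # and keep only the strongest channel_keep fraction.
    ch_energy = np.abs(feat).mean(axis=(1, 2))                 # shape (C,)
    ch_thresh = np.quantile(ch_energy, 1.0 - channel_keep)
    ch_mask = (ch_energy >= ch_thresh).astype(feat.dtype)      # 0/1 per channel

    # Spatial dimension: rank positions by mean energy across channels
    # and keep only the strongest spatial_keep fraction.
    sp_energy = np.abs(feat).mean(axis=0)                      # shape (H, W)
    sp_thresh = np.quantile(sp_energy, 1.0 - spatial_keep)
    sp_mask = (sp_energy >= sp_thresh).astype(feat.dtype)      # 0/1 per position

    # Surviving entries pass through unchanged; the rest are zeroed.
    return feat * ch_mask[:, None, None] * sp_mask[None, :, :]
```

In a real defense this masking would sit inside the network and be learned jointly with the alignment constraints the abstract mentions; the sketch above only shows the channel-and-spatial masking pattern in isolation.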

Details

Language :
English
ISSN :
2199-4536
Volume :
10
Issue :
3
Database :
Complementary Index
Journal :
Complex & Intelligent Systems
Publication Type :
Academic Journal
Accession number :
177309370
Full Text :
https://doi.org/10.1007/s40747-023-01311-0