Improving adversarial robustness by learning shared information.
- Source :
-
Pattern Recognition. Feb 2023, Vol. 134. - Publication Year :
- 2023
-
Abstract
- • Inspired by multi-view representation learning, we propose a scheme casting adversarial examples as a secondary view.
- • We propose and analyze our loss for learning representations with shared information between clean and adversarial samples.
- • We demonstrate that our method achieves improved robust vs. natural accuracy tradeoffs over several attacks and datasets.
- We consider the problem of improving the adversarial robustness of neural networks while retaining natural accuracy. Motivated by the multi-view information bottleneck formalism, we seek to learn a representation that captures the shared information between clean samples and their corresponding adversarial samples while discarding these samples' view-specific information. We show that this approach leads to a novel multi-objective loss function, and we provide mathematical motivation for its components towards improving the robust vs. natural accuracy tradeoff. We demonstrate enhanced tradeoff compared to current state-of-the-art methods with extensive evaluation on various benchmark image datasets and architectures. Ablation studies indicate that learning shared representations is key to improving performance. [ABSTRACT FROM AUTHOR]
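The general shape of such a multi-view objective can be sketched as follows. This is an illustrative toy implementation of the idea (classify both views well while penalizing disagreement between their representations), not the paper's actual loss; the function name, the squared-distance agreement term, and the weight `lam` are assumptions made for the sketch.

```python
import numpy as np

def shared_info_loss(z_clean, z_adv, logits_clean, logits_adv, labels, lam=1.0):
    """Toy multi-view-style objective: task loss on both views plus an
    agreement penalty between the two views' representations.
    Illustrative sketch only, not the loss from the paper."""
    def cross_entropy(logits, y):
        # numerically stable softmax cross-entropy, averaged over the batch
        logits = logits - logits.max(axis=1, keepdims=True)
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # classify both the clean and the adversarial view correctly
    task = cross_entropy(logits_clean, labels) + cross_entropy(logits_adv, labels)
    # agreement term: mean squared distance between the views' representations,
    # pushing the encoder to keep only information shared by both views
    agreement = np.mean((z_clean - z_adv) ** 2)
    return task + lam * agreement
```

When the two representations coincide, the agreement term vanishes and only the task losses remain; perturbing one view's representation strictly increases the objective, which is the mechanism that discards view-specific information.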
- Subjects :
- *ARTIFICIAL neural networks
Details
- Language :
- English
- ISSN :
- 00313203
- Volume :
- 134
- Database :
- Academic Search Index
- Journal :
- Pattern Recognition
- Publication Type :
- Academic Journal
- Accession number :
- 160172301
- Full Text :
- https://doi.org/10.1016/j.patcog.2022.109054