
Pre-trained Multiple Latent Variable Generative Models are good defenders against Adversarial Attacks

Authors :
Serez, Dario
Cristani, Marco
Del Bue, Alessio
Murino, Vittorio
Morerio, Pietro
Publication Year :
2024

Abstract

Attackers can deliberately perturb a classifier's input with subtle noise, altering the final prediction. Among the proposed countermeasures, adversarial purification employs generative networks to preprocess input images and filter out the adversarial noise. In this study, we propose specific generators, termed Multiple Latent Variable Generative Models (MLVGMs), for adversarial purification. These models possess multiple latent variables that naturally disentangle coarse features from fine details. Taking advantage of this property, we autoencode images to retain class-relevant information while discarding and re-sampling fine details, including the adversarial noise. The procedure is entirely training-free, exploiting the generalization abilities of pre-trained MLVGMs on the adversarial purification downstream task. Although we do not rely on large models trained on billions of samples, we show that smaller MLVGMs are already competitive with traditional methods and can serve as foundation models. Official code is released at https://github.com/SerezD/gen_adversarial.
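The purification procedure described in the abstract, keeping coarse (class-relevant) latent variables while re-sampling fine ones, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `encode`/`decode` callables and the `n_coarse` cutoff are hypothetical placeholders standing in for a pre-trained MLVGM's encoder and generator.

```python
import numpy as np

def purify(image, encode, decode, n_coarse, rng=None):
    """Hypothetical sketch of MLVGM-based adversarial purification.

    The image is encoded into a list of latent variables ordered from
    coarse to fine. The first `n_coarse` latents (class-relevant) are
    kept, while the remaining fine latents -- where subtle adversarial
    perturbations tend to reside -- are replaced with fresh samples
    from the prior (standard normal, as an assumption) before decoding.
    """
    rng = np.random.default_rng(rng)
    latents = encode(image)  # list of arrays, coarse -> fine
    cleaned = [
        z if i < n_coarse else rng.standard_normal(z.shape)
        for i, z in enumerate(latents)
    ]
    return decode(cleaned)
```

Because no weights are updated at any point, the procedure is training-free: it only requires forward passes through a pre-trained model.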

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2412.03453
Document Type :
Working Paper