
SD-NAE: Generating Natural Adversarial Examples with Stable Diffusion

Authors:
Lin, Yueqian
Zhang, Jingyang
Chen, Yiran
Li, Hai
Publication Year:
2023

Abstract

Natural Adversarial Examples (NAEs), images arising naturally from the environment and capable of deceiving classifiers, are instrumental in robustly evaluating and identifying vulnerabilities in trained models. In this work, unlike prior works that passively collect NAEs from real images, we propose to actively synthesize NAEs using the state-of-the-art Stable Diffusion. Specifically, our method formulates a controlled optimization process, where we perturb the token embedding that corresponds to a specified class to generate NAEs. This generation process is guided by the gradient of loss from the target classifier, ensuring that the created image closely mimics the ground-truth class yet fools the classifier. Named SD-NAE (Stable Diffusion for Natural Adversarial Examples), our innovative method is effective in producing valid and useful NAEs, which is demonstrated through a meticulously designed experiment. Code is available at https://github.com/linyueqian/SD-NAE.

Comment: Accepted by ICLR 2024 TinyPapers
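The optimization loop the abstract describes (perturb the class-token embedding, guided by the gradient of the target classifier's loss) can be sketched in PyTorch. This is a minimal illustration, not the authors' implementation: the real method uses Stable Diffusion's text-conditioned denoising pipeline as the generator, while here both the generator and the target classifier are stand-in linear modules so the loop is self-contained and runnable.

```python
import torch

torch.manual_seed(0)

embed_dim, num_classes = 8, 3
# Hypothetical stand-ins: in SD-NAE the generator is Stable Diffusion
# and the classifier is the model under attack; both are frozen here.
generator = torch.nn.Linear(embed_dim, 16)     # embedding -> "image" features
classifier = torch.nn.Linear(16, num_classes)  # target classifier
for p in list(generator.parameters()) + list(classifier.parameters()):
    p.requires_grad_(False)

true_class = 0
token_embedding = torch.randn(embed_dim)            # class-token embedding
delta = torch.zeros(embed_dim, requires_grad=True)  # learnable perturbation
target = torch.tensor([true_class])

with torch.no_grad():
    initial_ce = torch.nn.functional.cross_entropy(
        classifier(generator(token_embedding)).unsqueeze(0), target
    ).item()

opt = torch.optim.Adam([delta], lr=0.1)
for _ in range(100):
    image = generator(token_embedding + delta)
    logits = classifier(image)
    # Maximize the classifier's loss on the true class (to fool it)
    # while a small penalty keeps the perturbed embedding close to
    # the original class token (so the image stays class-faithful).
    loss = -torch.nn.functional.cross_entropy(
        logits.unsqueeze(0), target
    ) + 0.01 * delta.norm() ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    final_ce = torch.nn.functional.cross_entropy(
        classifier(generator(token_embedding + delta)).unsqueeze(0), target
    ).item()

print(f"true-class CE before: {initial_ce:.3f}, after: {final_ce:.3f}")
```

After the loop, the classifier's loss on the ground-truth class has increased, i.e. the perturbed embedding generates an input the classifier is less confident about, while the norm penalty bounds how far the embedding drifts from the original class token.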

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2311.12981
Document Type:
Working Paper