
On Procedural Adversarial Noise Attack And Defense

Authors :
Yan, Jun
Deng, Xiaoyang
Yin, Huilin
Ge, Wancheng
Publication Year :
2021

Abstract

Deep Neural Networks (DNNs) are vulnerable to adversarial examples: small perturbations of the input images that mislead the networks into prediction errors. Researchers have devoted considerable effort to universal adversarial perturbations (UAPs), which are gradient-free and require little prior knowledge of the data distribution. The procedural adversarial noise attack is a data-free method for generating universal perturbations. In this paper, we propose two UAP generation methods based on procedural noise functions: Simplex noise and Worley noise. In our framework, the shading that disturbs visual classification is generated with rendering technology. Without changing the semantic representations, the adversarial examples generated by our methods show superior attack performance.
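The abstract does not give implementation details. As an illustration only, the following sketch shows how a Worley (cellular) noise field, one of the two procedural noise functions named above, could be scaled into an L-infinity-bounded image perturbation. The epsilon budget, point count, and normalisation scheme here are assumptions for the sketch, not the paper's settings.

```python
import numpy as np

def worley_noise(height, width, n_points=20, seed=0):
    """Worley (cellular) noise: each pixel's value is the distance
    to the nearest of a set of randomly placed feature points."""
    rng = np.random.default_rng(seed)
    points = rng.uniform(0.0, 1.0, size=(n_points, 2)) * [height, width]
    ys, xs = np.mgrid[0:height, 0:width]
    coords = np.stack([ys, xs], axis=-1).astype(float)          # (H, W, 2)
    # Distance from every pixel to every feature point, then take the minimum.
    d = np.linalg.norm(coords[:, :, None, :] - points[None, None, :, :], axis=-1)
    return d.min(axis=-1)                                        # (H, W)

def make_perturbation(height, width, eps=8 / 255, **kwargs):
    """Normalise the noise field and scale it into a zero-centred
    perturbation bounded by eps in the L-infinity norm (eps is an
    assumed budget, common in the adversarial-example literature)."""
    n = worley_noise(height, width, **kwargs)
    n = (n - n.min()) / (n.max() - n.min())   # normalise to [0, 1]
    return (2.0 * n - 1.0) * eps              # values in [-eps, eps]

delta = make_perturbation(32, 32)
print(delta.shape, float(np.abs(delta).max()))
```

Because the same `delta` is added to every input image, this mirrors the "universal" (image-agnostic) character of the attack; the paper's own pipeline additionally involves rendering-based shading, which is not reproduced here.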

Details

Language :
English
Database :
OpenAIRE
Accession number :
edsair.doi.dedup.....755f427f56e006359e6cbda77f68647b