
AdverseGen: A Practical Tool for Generating Adversarial Examples to Deep Neural Networks Using Black-Box Approaches

Authors :
Zhang, K
Wu, K
Chen, S
Zhao, Y
Yao, X
Publication Year :
2021

Abstract

Deep neural networks are fragile: they are easily fooled by inputs with deliberate perturbations, which is a key concern for image security. Given a trained neural network, we want to know whether it has learned the concept we intended it to learn, and whether it has vulnerabilities that could be exploited by attackers. A tool that non-experts can use to test a trained neural network and probe for such vulnerabilities would therefore be useful. In this paper, we introduce AdverseGen, a tool for generating adversarial examples against a trained deep neural network using black-box approaches, i.e., without using any information about the network architecture or its gradients. Our tool provides customized adversarial attacks for both non-professional users and developers, and it can be invoked through a graphical user interface or in command-line mode. Moreover, it supports different attack goals (targeted and non-targeted) and different distance metrics.
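The black-box setting described in the abstract (query access to model outputs only, no gradients) can be illustrated with a minimal sketch. The code below is not AdverseGen's actual interface; the model, function names, and the random-search strategy under an L-infinity budget are all hypothetical stand-ins chosen only to show the idea of a non-targeted, query-based attack.

```python
# Minimal sketch of a non-targeted black-box attack via random search under an
# L-infinity budget. This is NOT AdverseGen's API; `toy_model` is a hypothetical
# stand-in that exposes only output scores (no gradients), matching the
# black-box setting described in the abstract.
import numpy as np

def toy_model(x):
    # Hypothetical classifier: deterministic pseudo-scores for 10 classes.
    rng = np.random.default_rng(abs(hash(x.tobytes())) % (2**32))
    return rng.random(10)

def black_box_attack(model, x, true_label, eps=0.05, queries=500, seed=0):
    """Random search: sample perturbations inside the L-inf ball of radius eps
    around x, keeping any candidate that lowers the true-label score."""
    rng = np.random.default_rng(seed)
    x_adv, best = x.copy(), model(x)[true_label]
    for _ in range(queries):
        delta = rng.uniform(-eps, eps, size=x.shape)
        candidate = np.clip(x + delta, 0.0, 1.0)
        scores = model(candidate)                # query-only feedback, no gradients
        if scores.argmax() != true_label:        # non-targeted goal: any misclassification
            return candidate, True
        if scores[true_label] < best:
            best, x_adv = scores[true_label], candidate
    return x_adv, False

if __name__ == "__main__":
    x = np.random.default_rng(1).random(28 * 28)   # stand-in for a normalized image
    adv, success = black_box_attack(toy_model, x, true_label=3)
    print("attack succeeded:", success)
```

A targeted variant would instead accept candidates that raise the score of a chosen target class, and other distance metrics (e.g., L2) would change how the perturbation is sampled and constrained.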

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1345555577
Document Type :
Electronic Resource