Adversarial Attacks and Defenses in Deep Learning: From a Perspective of Cybersecurity.
- Source :
- ACM Computing Surveys; Aug 2023, Vol. 55 Issue 8, p1-39, 39p
- Publication Year :
- 2023
-
Abstract
- The outstanding performance of deep neural networks has promoted deep learning applications in a broad set of domains. However, the potential risks caused by adversarial samples have hindered the large-scale deployment of deep learning. In these scenarios, adversarial perturbations, imperceptible to human eyes, significantly decrease the model's final performance. Many papers have been published on adversarial attacks and their countermeasures in the realm of deep learning. Most focus on evasion attacks, where the adversarial examples are found at test time, as opposed to poisoning attacks where poisoned data is inserted into the training data. Further, it is difficult to evaluate the real threat of adversarial attacks or the robustness of a deep learning model, as there are no standard evaluation methods. Hence, with this article, we review the literature to date. Additionally, we attempt to offer the first analysis framework for a systematic understanding of adversarial attacks. The framework is built from the perspective of cybersecurity to provide a lifecycle for adversarial attacks and defenses. [ABSTRACT FROM AUTHOR]
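- To make the notion of a test-time evasion attack concrete, the sketch below uses the fast gradient sign method (FGSM), a common baseline attack that adds a small, sign-of-gradient perturbation bounded by epsilon to an input. This is an illustrative example only, not the framework proposed in the article; the model, tensors, and epsilon value are hypothetical placeholders.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft an adversarial example x' = x + epsilon * sign(grad_x L(f(x), y))."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss; keep pixel values in [0, 1].
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy demonstration with a randomly initialised linear classifier (hypothetical setup).
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)   # stand-in for a normalised image
    y = torch.tensor([3])          # stand-in for the true label
    x_adv = fgsm_perturb(model, x, y)
    print((x_adv - x).abs().max())  # perturbation magnitude is bounded by epsilon
```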
Details
- Language :
- English
- ISSN :
- 0360-0300
- Volume :
- 55
- Issue :
- 8
- Database :
- Complementary Index
- Journal :
- ACM Computing Surveys
- Publication Type :
- Academic Journal
- Accession number :
- 161028303
- Full Text :
- https://doi.org/10.1145/3547330