How to Defend and Secure Deep Learning Models Against Adversarial Attacks in Computer Vision: A Systematic Review.
- Source :
- New Generation Computing. Dec 2024, Vol. 42, Issue 5, p1165-1235. 71p.
- Publication Year :
- 2024
Abstract
- Deep learning plays a significant role in developing robust and constructive frameworks for tackling complex learning tasks. Consequently, it is widely used in security-critical contexts such as self-driving and biometric systems. Owing to their complex structure, Deep Neural Networks (DNNs) are vulnerable to adversarial attacks. Adversaries can deploy attacks at training or testing time and can pose significant security risks in safety-critical applications. It is therefore essential to understand adversarial attacks, the methods used to craft them, and the strategies for defending against them. Moreover, finding effective defenses against malicious attacks that promote robustness and provide additional security in deep learning models is critical, so the various challenges concerning the robustness of deep learning models need to be analyzed. This work presents a systematic review of primary studies that focus on providing efficient and robust frameworks against adversarial attacks. A standard SLR (Systematic Literature Review) method was used to review studies from different digital libraries, and several research questions were designed and answered thoroughly. The study classifies several defensive strategies and discusses the major conflicting factors that can enhance robustness and efficiency. The impact of adversarial attacks and their perturbation metrics is also analyzed for the different defensive approaches. The findings of this study assist researchers and practitioners in choosing an appropriate defensive strategy by incorporating considerations of the varying research issues and recommendations. Finally, drawing on the reviewed studies, this work identifies future directions for researchers to design robust and innovative solutions against adversarial attacks. [ABSTRACT FROM AUTHOR]
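- To make the abstract's notions of attack crafting and perturbation metrics concrete, here is a minimal sketch of one well-known test-time attack, the Fast Gradient Sign Method (FGSM), which perturbs an input in the direction of the loss gradient under an L-infinity budget. This example is illustrative only and is not taken from the reviewed paper; the toy model, tensor shapes, and eps budget are assumptions.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                eps: float = 8 / 255) -> torch.Tensor:
    """Return an adversarial copy of x with L-inf perturbation <= eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage with a hypothetical linear classifier on random "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)            # pixel values in [0, 1]
y = torch.randint(0, 10, (4,))
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max().item())   # L-inf perturbation size, <= eps
```

- The L-infinity bound here is one example of the perturbation metrics the review analyzes; defenses such as adversarial training work by retraining the model on examples like x_adv.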
Details
- Language :
- English
- ISSN :
- 0288-3635
- Volume :
- 42
- Issue :
- 5
- Database :
- Academic Search Index
- Journal :
- New Generation Computing
- Publication Type :
- Academic Journal
- Accession number :
- 180654180
- Full Text :
- https://doi.org/10.1007/s00354-024-00283-0