Attacks based on malicious perturbations on image processing systems and defense methods against them
- Author
- Dmitry A. Esipov, Abdulhamid Y. Buchaev, Akylzhan Kerimbay, Yana V. Puzikova, Semen K. Saidumarov, Nikita S. Sulimenko, Ilya Yu. Popov, and Nikolay S. Karmanovskiy
- Subjects
- artificial intelligence, artificial neural network, image processing, adversarial attack, backdoor embedding, adversarial perturbation, adversarial learning, defense distillation, feature squeezing, certified defense, data preprocessing, Optics. Light, QC350-467, Electronic computers. Computer science, QA75.5-76.95
- Abstract
- Systems implementing artificial intelligence technologies have become widespread due to their effectiveness in solving applied tasks such as computer vision. Neural network image processing is also used in security-critical systems. At the same time, the use of artificial intelligence carries characteristic threats, including disruption of machine learning models. The phenomenon of triggering an incorrect neural network response by introducing perturbations that are visually imperceptible to a human was first described, and attracted the attention of researchers, in 2013. Since then, attack methods based on malicious perturbations have been continuously refined, and ways of disrupting neural networks have been proposed for various data types and target-model tasks. The threat posed by these attacks has become a significant problem for systems implementing artificial intelligence technologies, making research on countering attacks based on malicious perturbations highly relevant. This article surveys current attacks of this kind on artificial intelligence based image processing systems and provides a comparative analysis of them. Approaches to classifying attacks based on malicious perturbations are formulated. Defense methods against such attacks are reviewed, their shortcomings are identified, and the limitations of applied defense methods that reduce the effectiveness of countering the attacks are shown. Approaches and practical measures for detecting and eliminating malicious perturbations are proposed.
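For readers unfamiliar with the phenomenon the abstract describes, the sketch below (not taken from the article) shows the canonical fast gradient sign method (FGSM), the textbook example of a visually imperceptible perturbation that flips a classifier's prediction. The model, input tensor, and epsilon value are placeholders chosen for illustration.

```python
# Illustrative sketch only (not from the article): FGSM-style adversarial
# perturbation of an image classifier. Model and data are placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=8 / 255):
    """Return an adversarially perturbed copy of `image`.

    model   -- any differentiable image classifier (e.g. a torchvision ResNet)
    image   -- input tensor of shape (1, C, H, W), values in [0, 1]
    label   -- ground-truth class index, shape (1,)
    epsilon -- perturbation budget in the L-infinity norm (assumed value)
    """
    image = image.clone().detach().requires_grad_(True)

    # Forward pass and loss with respect to the true label.
    loss = F.cross_entropy(model(image), label)

    # Gradient of the loss with respect to the input pixels.
    loss.backward()

    # Step in the direction that increases the loss, then clip to a valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

With a budget around 8/255 the perturbation is typically invisible to a human observer, yet an undefended model's prediction often changes; the defenses surveyed in the article (adversarial learning, defensive distillation, feature squeezing, certified defenses, data preprocessing) aim to close exactly this gap.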
- Published
- 2023